- 25 Jan 2019, 9 commits
-
-
Committed by Daniel Gustafsson
Since GPDB plans CTEs quite differently from upstream, the initplan generation in SS_process_ctes() is not required and has been commented out. This extends the commenting-out to the function itself and not just the caller, to assist code reading and understanding (reading the function without inspecting the caller can easily lead to incorrect assumptions). Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Georgios Kokolatos
Fix the compiler warning `suggest parentheses around '&&' within '||'` produced by gcc and clang. Reproduced in gcc versions 4.8 - 7.3. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Hubert Zhang
Background workers have no client at all, so there is no need to forward QE messages. Moreover, MyProcPort is not initialized in background workers, while the notice-forwarding code currently assumes MyProcPort is already initialized. This was first detected in the diskquota background worker, and was also reported in #6756, where it caused a GDD infinite loop; GDD currently sets MyProcPort manually. This commit also removes MyProcPort from GDD, since there is no client in the GDD process either. GDD and background workers will now both skip generating/forwarding QE notices. Co-authored-by: Zhenghua Lyu <zlv@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Pengcheng Tang <petang@pivotal.io>
-
Committed by Shaoqi Bai
* Remove DB_IN_STANDBY_MODE and DB_IN_STANDBY_PROMOTED and use DB_IN_ARCHIVE_RECOVERY instead Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Wang Hao
There was a field queryCommandId in PGPROC in 5.X and earlier versions. After some code refactoring it became dead code and was deleted. However, it is required by some monitoring extensions, so this commit brings it back with an implementation similar to 5.X. Closes https://github.com/greenplum-db/gpdb/issues/6569 Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Ning Fu
* Save git root, submodule and changelog information. The following information will be saved in ${GREENPLUM_INSTALL_DIR}/etc/git-info.json:
  * Root repo (uri, sha1)
  * Submodules (submodule source path, sha1, tag)
* Save git commits since the last release tag into ${GREENPLUM_INSTALL_DIR}/etc/git-current-changelog.txt
-
Committed by Shaoqi Bai
Disallow altering a partitioned table's leaf partition to have a different distribution than the partitioned table as a whole. (#6780) Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Alexandra Wang
The GPDB_92_MERGE_FIXME asking whether we need a deep copy or memcpy of the subroot can be removed: all we care about from the subroot is `parse->rtable`, so creating a deep copy of it is unnecessary. This commit also removes an `Assert()` which is valid upstream but not in GPDB, since we create a new copy of the subplan when two SubPlans refer to the same initplan. Therefore, when we set references for subquery scans in plans containing copies of subplans referring to the same initplan, we cannot directly Assert that the RelOptInfo's subplan is the same as the subquery scan's subplan. Added a test case for this, which will ensure we do not merge the Assert back from upstream in future merges. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ekta Khanna
Here the check for `RTE_CTE` is correct, since in GPDB a CTE is planned much like a subquery. Hence, remove the FIXME. Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
- 24 Jan 2019, 18 commits
-
-
Committed by Daniel Gustafsson
This cleans up the error messages in the sparse vector code a little, ensuring they mostly conform to the style guide for error handling. Also fixes a nearby typo and removes commented-out elogs which are clearly dead code. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The gp_sparse_vector module was covered by the relicensing done as part of the Greenplum open sourcing, but a few mentions of the previous licensing remained in the code. The legal situation of this code has been reviewed and cleared by Pivotal legal, so remove the incorrect statements and replace them with the standard copyright file headers. This also cleans up a few comments while at it. Reviewed-by: Cyrus Wadia Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The float_specials.h header was removed shortly after this contrib module was imported in 2010, and has been dead code since. Remove. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
This removes a few unused functions, and inlines the function body of another which had only a single caller. Also properly marks a few functions as static. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Remove a redundant test on array_agg which didn't have stable output, and remove an ORDER BY to let atmsort deal with the differences instead. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The histogram structure was allocated statically via malloc(), but it had no data retention between calls; it was purely a micro-optimization to avoid the cost of repeated allocations. This led to the allocated memory leaking, as it was never cleaned up automatically. Fix by pallocing the memory instead and accepting the cost of repeated allocation. Also ensure that allocated memory is properly cleaned up in failure cases. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
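The shape of the fix can be sketched like this, with plain calloc()/free() standing in for palloc() and memory-context cleanup; build_histogram() and its parameters are illustrative names, not the module's real ones, and input values are assumed non-negative:

```c
#include <stdlib.h>

#define NBUCKETS 32

/* Before: a static malloc'd buffer cached across calls. Nothing ever
 * freed it, so the allocation leaked for the life of the process even
 * though no data was retained between calls.
 *
 * After: allocate per call and pay the small allocation cost; the memory
 * has a clear owner and is released on every path, including failure
 * paths. In Postgres this would be palloc(), whose memory is also
 * reclaimed automatically when its memory context is reset. */
static int
build_histogram(const int *vals, size_t n, double *out_mean)
{
    double *hist = calloc(NBUCKETS, sizeof(double));
    if (hist == NULL)
        return -1;                  /* failure path: nothing leaked */

    double total = 0.0;
    for (size_t i = 0; i < n; i++)
    {
        hist[vals[i] % NBUCKETS] += 1.0;   /* assumes vals[i] >= 0 */
        total += vals[i];
    }
    *out_mean = (n > 0) ? total / (double) n : 0.0;

    free(hist);                     /* success path: released as well */
    return 0;
}
```

The trade-off is exactly the one the commit accepts: a fresh allocation per call in exchange for no retained memory.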
-
Committed by Daniel Gustafsson
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
palloc() is guaranteed to return only on successful allocation, so there is no need to check its result. ereport(ERROR, ...) is guaranteed never to return, and to clean up on its way out, so pfree()ing after an ereport() is not just unreachable code; it would be a double free if it were ever reached. Also add proper checks on the malloc() and strdup() calls, as those must be checked by the programmer as usual. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Adam Lee
This part of the code is not covered by the PR pipeline; it was tested manually.
-
Committed by Adam Lee
fmtQualifiedId() and fmtId() share the same buffer, so neither can be called again until we have finished using the result of the previous call.
-
Committed by Melanie
When a subquery under a SUBLINK might get pulled up, we should not allow index scans to be chosen for the relation in the subquery's range table. If that relation is distributed and the subquery is pulled up, the relation needs to be redistributed or broadcast and materialized on the segments, but cdbparallelize will not add a Motion and materialize an index scan, so an index scan cannot be used in these cases. An index scan cannot be materialized because it materializes only one tuple at a time, and comparing that tuple to the param obtained from the relation on the segments can produce wrong results. Because index scans are not picked very often, this issue is rarely seen: it requires a subquery referring to a distributed table in a subplan which gets pulled up during planning, with the index scan then being the cheapest path. Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Jinbao Chen <jinchen@pivotal.io> Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by David Kimura
Stale replication slots can exist on mirrors that once acted as primaries. In this case restart_lsn holds a non-zero value from the earlier replication slot setup. The stale replication slot continues to retain xlog on the mirror, which is problematic and unnecessary. This patch drops the internal replication slot on startup of the mirror. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Jesse Zhang
We backported 128-bit integer support to speed up aggregates (commits 8122e143 and 959277a4) from upstream 9.6 into Greenplum (in commits 9b164486 and 325e6fcd). However, we forgot to also port a follow-up fix, postgres/postgres@7518049980b, mostly because it's nuanced and hard to reproduce. There are two ways to tell the brokenness: 1. On a lucky day, tests would fail on my workstation, but not my laptop (or vice versa). 2. If you stare at the generated code for `int8_avg_combine` (and friends), you'll notice the compiler uses "aligned" instructions like `movaps` and `movdqa` (on AMD64). Today's my lucky day. Original commit message from postgres/postgres@7518049980b (by Tom Lane): > Our initial work with int128 neglected alignment considerations, an > oversight that came back to bite us in bug #14897 from Vincent Lachenal. > It is unsurprising that int128 might have a 16-byte alignment requirement; > what's slightly more surprising is that even notoriously lax Intel chips > sometimes enforce that. > Raising MAXALIGN seems out of the question: the costs in wasted disk and > memory space would be significant, and there would also be an on-disk > compatibility break. Nor does it seem very practical to try to allow some > data structures to have more-than-MAXALIGN alignment requirement, as we'd > have to push knowledge of that throughout various code that copies data > structures around. > The only way out of the box is to make type int128 conform to the system's > alignment assumptions. Fortunately, gcc supports that via its > __attribute__(aligned()) pragma; and since we don't currently support > int128 on non-gcc-workalike compilers, we shouldn't be losing any platform > support this way.
> Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and > called it a day, I did a little bit of extra work to make the code more > portable than that: it will also support int128 on compilers without > __attribute__(aligned()), if the native alignment of their 128-bit-int > type is no more than that of int64. > Add a regression test case that exercises the one known instance of the > problem, in parallel aggregation over a bigint column. > This will need to be back-patched, along with the preparatory commit > 91aec93e. But let's see what the buildfarm makes of it first. > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org (cherry picked from commit 75180499)
-
Committed by Jesse Zhang
This cherry-picks 91aec93e. We had to be extra careful to preserve the still-in-use macros UnusedArg, STATIC_IF_INLINE and friends. > Generalize section 1 to handle stuff that is principally about the > compiler (not libraries), such as attributes, and collect stuff there > that had been dropped into various other parts of c.h. Also, push > all the gettext macros into section 8, so that section 0 is really > just inclusions rather than inclusions and random other stuff. > The primary goal here is to get pg_attribute_aligned() defined before > section 3, so that we can use it with int128. But this seems like good > cleanup anyway. > This patch just moves macro definitions around, and shouldn't result > in any changes in generated code. But I'll push it out separately > to see if the buildfarm agrees. > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org (cherry picked from commit 91aec93e)
-
Committed by David Kimura
Currently GDD sets DistributedTransactionContext to DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE and as a result allocates a distributed transaction id. It creates an entry in ProcGlobal->allTmGxact with state DTX_STATE_ACTIVE_NOT_DISTRIBUTED. The effect is that any query taking a snapshot sees this transaction as in progress. Since the GDD transaction is short-lived this is not an issue in general, but in CI it causes flaky behavior in some of the vacuum tests. The flakiness shows up as unvacuumed tables, where the vacuum snapshot was taken while the GDD transaction was running, forcing vacuum to lower its oldest xmin. GDD consuming a distributed transaction id (every 2 minutes by default) is also wasteful, as is the snapshot GDD currently sends to QEs, which is not required. With this change GDD keeps DistributedTransactionContext as DTX_CONTEXT_LOCAL_ONLY and avoids dispatching snapshots to QEs. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
Currently, dbid is used in the tablespace path, so the dbid is needed while creating a segment. To get the dbid, the segment must first be added to the catalog, but adding the segment to the catalog before creating it causes issues. Hence, modify gpexpand so that the database does not generate the dbid; instead, the dbid generated up front while registering in the catalog is passed in. This way the dbid used while creating the segment is the same as the dbid in the catalog. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Lav Jain
-
Committed by Georgios Kokolatos
An argument can be made that hidden tuples in AO tables are similar to dead tuples in regular tables. However, the use of this information in pgstats is semantically distinct and consequently should not be exposed. As an example, after a VACUUM (FULL, ANALYZE) of an AO table, hidden tuples will remain if the AO compaction thresholds are not met. It seems preferable to explicitly pass 0 instead of the already-zeroed LVRelStats member, for clarity. Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 23 Jan 2019, 13 commits
-
-
Committed by Jialun
When a table has been transformed into a view by creating an ON SELECT rule, the record in gp_distribution_policy should also be deleted, since there is no such record for a view. Also, the relstorage in pg_class should be changed to 'v'.
-
Committed by Dmitriy Dubson
Add missing documentation on the newly required `libzstd` dependency. Reviewed-by: Jimmy Yih <jyih@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Pengzhou Tang
The gp_toolkit.gp_skew_* views/functions are used to query how data is skewed in the database. The idea is to use a query like "select gp_segment_id, count(*) cnt from foo group by gp_segment_id" and compare the cnt values by gp_segment_id. For a replicated table, only one replica is picked by the planner to count the tuples, so the old calculation logic produced the confusing result that a replicated table is skewed, which is not expected:
gpadmin=# select * from gp_toolkit.gp_skew_idle_fractions;
 sifoid | sifnamespace | sifrelname |      siffraction
--------+--------------+------------+------------------------
  16385 | public       | rpt        | 0.66666666666666666667
What's more, gp_segment_id is ambiguous for a replicated table, so in commit b120194a we disallowed user access to system columns including gp_segment_id, and the gp_toolkit.gp_skew_* views now report an error. This commit corrects the results of the gp_toolkit.gp_skew_* views/functions for replicated tables; although the results are pointless, this is friendlier for users.
-
Committed by Paul Guo
Remove the obsolete comment for RETURNING and put the test in a parallel running group, following the PostgreSQL upstream.
-
Committed by Paul Guo
The gp_toolkit test exercises various log-related views like gp_log_system(), etc. If we run the test earlier, fewer logs have been generated and the test runs faster. In my test environment, the test time drops from ~22 seconds to 6.x seconds with this patch. I also checked the whole test case; this change does not affect the test coverage.
-
Committed by Pengzhou Tang
In the 9.0 merge, we added the following rule for FOR UPDATE: SELECT FOR UPDATE locks the whole table, and we do this in addRangeTableEntry. The reason is that GPDB is an MPP database and the result tuples may not be on the same segment; moreover, for a cursor statement the reader gang cannot get the Xid needed to lock the tuples, so we did not add a LockRows node for distributed tables. This rule should also apply to replicated tables.
-
Committed by David Yozie
* Synchronize the mpp_execute option description and precedence rules in end-user documentation
* Describe the order of precedence in each command
* one any -> any one
* Feedback from Lisa
-
Committed by ZhangJackey
In the previous code, the parent partition's columns could be modified with ALTER TABLE ONLY, so the columns of the parent partition and child partitions could diverge. In order to prohibit this situation, we check DROP COLUMN / ADD COLUMN / ALTER TYPE COLUMN statements to prevent the user from modifying only the columns of the parent partition or only those of the child partitions. There was a discussion on gpdb-dev@: https://groups.google.com/a/greenplum.org/forum/#!msg/gpdb-dev/0SzL_gSbqKo/d-2RpwKrFwAJ
-
Committed by Bradford D. Boyle
It doesn't build because --disable-orca is not passed to configure, and pivotaldata/gpdb-devel doesn't have xerces, on which Orca depends. It seems this Dockerfile is not used; the Dockerfiles in ./src/tools/docker/*/Dockerfile are more recently maintained. Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by Kris Macoskey
For GPDB 6 Beta, only CentOS 6/7 need to be passing for the same commit to be a valid release candidate. This was originally done in commit fa63e7ab, but that commit was missing an update to the task yaml for the Release_Candidate job to accommodate removal of the sles11 input. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Since gp_dbid and gp_contentid are stored in conf files on the QE, it is helpful to validate that the values match between the QD catalog table gp_segment_configuration and the QE. This validation is performed using FTS: the FTS message includes the gp_dbid and gp_contentid values from the catalog, and the QE validates them while handling the FTS message, PANICking if it finds an inconsistency. This check is mostly targeted at development, to catch missed handling of gp_dbid and gp_contentid values in config files by future features like pg_upgrade and gpexpand, which copy the master directory and convert it to a segment. Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Ashwin Agrawal
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-