- 24 Jan 2019, 3 commits
-
-
Committed by Ashwin Agrawal
Currently, the dbid is used in the tablespace path, so the dbid is needed while creating a segment. To get the dbid, the segment must first be added to the catalog, but adding a segment to the catalog before creating it causes issues. Hence, modify gpexpand so that the database does not generate the dbid; instead, pass a dbid generated upfront while registering the segment in the catalog. This way the dbid used while creating the segment is the same as the dbid in the catalog. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Lav Jain
-
Committed by Georgios Kokolatos
An argument can be made that hidden tuples in AO tables are similar to dead tuples in regular tables. However, the use of this information with regards to pgstats seems to be semantically distinct and consequently should not be exposed. As an example, after a VACUUM (FULL, ANALYZE) of an AO table, hidden tuples will remain if the AO compaction thresholds are not met. It seems preferable to explicitly pass 0 instead of the already zeroed LVRelStats member, for clarity. Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 23 Jan 2019, 15 commits
-
-
Committed by Jialun
When a table has been transformed into a view by creating an ON SELECT rule, the record in gp_distribution_policy should also be deleted, as there is no such record for a view. Also, relstorage in pg_class should be changed to 'v'.
-
Committed by Dmitriy Dubson
Add missing documentation for the newly required `libzstd` dependency. Reviewed-by: Jimmy Yih <jyih@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Pengzhou Tang
The gp_toolkit.gp_skew_* views/functions are used to query how data is skewed across the database. The idea is to use a query like "select gp_segment_id, count(*) cnt from foo group by gp_segment_id" and compare the counts per gp_segment_id. For a replicated table, only one replica is picked by the planner to count the tuple number, so the old calculation logic produced the confusing result that a replicated table is skewed, which is not expected:

gpadmin=# select * from gp_toolkit.gp_skew_idle_fractions;
 sifoid | sifnamespace | sifrelname |      siffraction
--------+--------------+------------+------------------------
  16385 | public       | rpt        | 0.66666666666666666667

What's more, gp_segment_id is ambiguous for a replicated table, so in commit b120194a we disallowed user access to system columns including gp_segment_id, and the gp_toolkit.gp_skew_* views now report an error. This commit corrects the results of the gp_toolkit.gp_skew_* views/functions for replicated tables; although the results are pointless for such tables, this behavior is more friendly for users.
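The fraction shown in the output above can be reproduced with a small sketch (the helper below is hypothetical, not the view's actual SQL): when only one of three segments' replicas is counted by the planner, two thirds of the segments appear to hold no rows.

```python
# Hypothetical sketch of the per-segment row-count comparison behind the
# gp_skew_* "idle fraction": for a replicated table the planner counts
# only one replica, so the other segments look empty even though each
# segment actually holds a full copy of the data.
def idle_fraction(rows_per_segment):
    """Fraction of segments that appear to hold no rows."""
    total = len(rows_per_segment)
    idle = sum(1 for n in rows_per_segment if n == 0)
    return idle / total

# Replicated table on 3 segments; only segment 0's replica is counted.
counts_seen_by_planner = [1000, 0, 0]
print(idle_fraction(counts_seen_by_planner))  # matches the 0.6666... above
```

This is why the old logic reported a replicated table as skewed: the skew metric only makes sense when each row lives on exactly one segment.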
-
Committed by Paul Guo
Remove the obsolete comment for RETURNING and put the test in a parallel running group, following PostgreSQL upstream.
-
Committed by Paul Guo
The gp_toolkit test exercises various log-related views like gp_log_system(). If we run the test earlier, fewer logs have been generated and the test runs faster. In my test environment, the test time drops from ~22 seconds to 6.x seconds with this patch. I also checked the whole test case; this change does not affect the test coverage.
-
Committed by Pengzhou Tang
In the 9.0 merge, we added the rule below for FOR UPDATE: SELECT FOR UPDATE will lock the whole table, and we do it at addRangeTableEntry. The reason is that GPDB is an MPP database, so the result tuples may not be on the same segment, and for a cursor statement the reader gang cannot get the Xid to lock the tuples; that is why we did not add a LockRows node for distributed tables. This rule should also apply to replicated tables.
-
Committed by David Yozie
* Synchronize mpp_execute option description and precedence rules in end-user documentation
* describe the order of precedence in each command
* one any -> any one
* Feedback from Lisa
-
Committed by ZhangJackey
In the previous code, we could modify a parent partition's column with ALTER TABLE ONLY, so the columns of the parent partition and child partitions could end up differing. To prohibit this situation, we check DROP COLUMN / ADD COLUMN / ALTER TYPE COLUMN statements to prevent the user from modifying only the columns of the parent partition or only those of the child partitions. There was a discussion on gpdb-dev@: https://groups.google.com/a/greenplum.org/forum/#!msg/gpdb-dev/0SzL_gSbqKo/d-2RpwKrFwAJ
-
Committed by Bradford D. Boyle
It doesn't build because --disable-orca is not being passed to configure, and pivotaldata/gpdb-devel doesn't have Xerces, on which ORCA depends. It seems this Dockerfile is not used; the Dockerfiles in ./src/tools/docker/*/Dockerfile are more recently maintained. Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by Kris Macoskey
For GPDB 6 Beta, only CentOS 6/7 need to be passing for the same commit to be a valid release candidate. This was originally done in commit fa63e7ab, but that commit was missing an update to the task YAML for the Release_Candidate job to accommodate removal of the sles11 input. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Since gp_dbid and gp_contentid are stored in conf files on the QE, it is helpful to have a validation that compares the values between the QD catalog table gp_segment_configuration and the QE. This validation is performed using FTS: the FTS message includes the gp_dbid and gp_contentid values from the catalog, the QE validates the values while handling the FTS message, and it PANICs if it finds an inconsistency. This check is mostly targeted at development, to catch missed handling of gp_dbid and gp_contentid values in config files for future features like pg_upgrade and gpexpand, which copy the master directory and convert it to a segment. Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Ashwin Agrawal
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Ashwin Agrawal
Currently, gp_dbid and gp_contentid are passed as command-line arguments when starting a QD or QE. Since the values are stored in the master's catalog table, the master must be started first to get the right values; hence a hard-coded dbid=1 was always used for starting the master in admin mode. This worked fine as long as the dbid was not used for anything on disk. But given that the dbid is used in the tablespace path in GPDB 6, starting the instance with the wrong dbid invites recovery-time failures, data corruption, or data loss. dbid=1 goes wrong after failover to the standby master, as the standby has dbid != 1.

This commit therefore eliminates the need to pass gp_dbid and gp_contentid on the command line; instead, the values are stored in conf files when the instance is created. This also avoids passing gp_dbid as an argument to pg_rewind, which needs to start the target instance in single-user mode to complete recovery before performing the rewind operation. Plus, it eases development: just use pg_ctl start, with no need to pass these values correctly.

- gp_contentid is stored in postgresql.conf.
- gp_dbid is stored in internal.auto.conf.
- Introduce the internal.auto.conf file, created during initdb and included from postgresql.conf. A separate file was chosen for gp_dbid to ease handling during pg_rewind and pg_basebackup, which can then exclude copying this file from primary to mirror instead of editing its contents after the copy. gp_contentid remains the same for primary and mirror, so keeping it in postgresql.conf makes sense; if gp_contentid were also stored in internal.auto.conf, pg_basebackup would need to be passed the contentid as well in order to write it to this file.
- pg_basebackup: write the gp_dbid after backup. Since gp_dbid is unique for primary and mirror, pg_basebackup excludes copying the internal.auto.conf file storing gp_dbid and explicitly (over)writes the file with the value passed as --target-gp-dbid, which is therefore now a mandatory argument to pg_basebackup.
- gpexpand: update gp_dbid and gp_contentid after the directory copy.
- pg_upgrade: retain all configuration files for a segment. postgresql.auto.conf and internal.auto.conf are also internal configuration files which should be restored after the directory copy. A similar change is required in the gp_upgrade repo, in restoreSegmentFiles() after copyMasterDirOverSegment().
- Update tests to avoid passing gp_dbid and gp_contentid.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
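The file layout described above can be sketched as follows; the exact settings and values shown are illustrative assumptions, not copied from the commit:

```
# postgresql.conf -- identical content works for a primary and its mirror
include 'internal.auto.conf'    # per-instance settings live in the included file
gp_contentid = 0                # same value for a primary/mirror pair

# internal.auto.conf -- unique per instance; pg_basebackup skips copying it
# and instead (over)writes it using --target-gp-dbid
gp_dbid = 2
```

Splitting the per-instance value into its own included file is what lets pg_basebackup and pg_rewind simply exclude one file rather than edit a copied postgresql.conf in place.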
-
Committed by Ashwin Agrawal
To create mirrors, a pg_basebackup needs to be performed, and pg_basebackup needs the dbid as an argument to handle tablespaces correctly. This requirement exists because the dbid is used in the tablespace path. For the dbid in the master catalog to be in sync with what the mirror uses for tablespaces, the mirror must be added to the catalog first; then get the dbid and pass it to pg_basebackup when creating the mirror. Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
- 22 Jan 2019, 3 commits
-
-
Committed by Adam Lee
pg_upgrade doesn't like it; please revert this commit once the restriction is removed.
```
Checking for external tables used in partitioning
fatal
| Your installation contains partitioned tables with external
| tables as partitions. These partitions need to be removed
| from the partition hierarchy before the upgrade. A list of
| external partitions to remove is in the file:
|     external_partitions.txt

Failure, exiting
```
-
Committed by Adam Lee
We forgot to dump the namespace while processing external partitions. This became a problem when upstream pg_dump decided not to dump the search_path; this commit fixes it.
-
Committed by Haozhou Wang
If both the master and the standby master are set up on the same node, the gppkg utility reports an error when uninstalling a gppkg. This is because gppkg assumes the master and standby master are on different nodes, which may not be true in a test environment. This patch fixes the issue: when the master and standby master are on the same node, we skip installing/uninstalling the gppkg on the standby master node.
-
- 21 Jan 2019, 2 commits
-
-
Committed by Shaoqi Bai
The code was added to handle the case where FTS sends a promote message: the mirror creates the PROMOTE file and is signaled to promote. But while the mirror is still being promoted, FTS may send promote again, which creates the PROMOTE file again. This PROMOTE file then exists on the promoted mirror acting as primary, so if a basebackup was taken from this primary to create a mirror, it included the PROMOTE file and incorrectly auto-promoted the new mirror on creation. Hence, code was added for FTS to detect and delete a lingering PROMOTE file, along with pg_basebackup excluding the PROMOTE file from the copy. Given that background, and the upstream commit that always deletes the PROMOTE file on postmaster start, even a PROMOTE file created after mirror promotion and copied over by pg_basebackup poses no risk of auto-promotion on mirror startup. So we can safely remove this code now. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Richard Guo
The function largest_child_relation() is used to recursively find the largest child relation of an inherited/partitioned relation. Previously we passed the wrong rel as its parameter. This patch finds the right rel for largest_child_relation() in root->simple_rel_array. It also replaces several rt_fetch calls with lookups in root->simple_rte_array. This patch fixes #6599. Co-authored-by: Melanie Plageman <mplageman@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 19 Jan 2019, 8 commits
-
-
Committed by Lisa Owen
* docs - reorg pxf content, add multi-server, objstore content
* misc edits, SERVER not optional
* add server, remove creds from examples
* address comments from alexd
* most edits requested by david
* add Minio to table column name
* edits from review with pxf team (start)
* clear text credentials, reorg objstore cfg page
* remove steps with XXX placeholder
* add MapR to supported hadoop distro list
* more objstore config updates
* address objstore comments from alex
* one parquet data type mapping table, misc edits
* misc edits from david
* add mapr hadoop config step, misc edits
* fix formatting
* clarify copying libs for MapR
* fix pxf links on CREATE EXTERNAL TABLE page
* misc edits
* mapr paths may differ based on version in use
* misc edits, use full topic name
* update OSS book for pxf subnav restructure
-
Committed by David Kimura
This commit addresses a race condition where the pg_xlog directory could go missing during xlog streaming. The race exists only with --forceoverwrite and --xlog stream. In stream mode, pg_basebackup forks one process to populate the pg_xlog directory with new transaction files and another process to receive and untar the base directory contents. Force overwrite removes an existing pg_xlog directory before copying contents from the tar file, which is problematic if the untar process deletes the xlog directory while the stream process tries to write to it. To avoid this situation in forceoverwrite mode, the deletion of pg_xlog now happens before the stream and untar processes start, which lets the untar process skip deleting pg_xlog. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
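As a toy illustration (the names and file layout below are invented, not from pg_basebackup), the reordering can be sketched: delete and recreate pg_xlog once before either worker starts, so the two workers never race on the directory.

```python
# Hypothetical sketch of the fix: in forceoverwrite mode, pg_xlog is
# cleared up front, after which the stream worker writes into it and the
# untar worker no longer deletes it.
import os
import shutil
import tempfile
import threading

def force_overwrite_backup(datadir):
    xlog = os.path.join(datadir, "pg_xlog")
    # Step 1 (the fix): remove any existing pg_xlog before forking workers.
    shutil.rmtree(xlog, ignore_errors=True)
    os.makedirs(xlog)

    def stream_xlog():
        # Stands in for the WAL-stream child writing transaction files.
        with open(os.path.join(xlog, "000000010000000000000001"), "w") as f:
            f.write("wal")

    def untar_base():
        # Stands in for the untar child: it now skips deleting pg_xlog,
        # so it can no longer pull the directory out from under the streamer.
        pass

    workers = [threading.Thread(target=stream_xlog),
               threading.Thread(target=untar_base)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return os.listdir(xlog)

datadir = tempfile.mkdtemp()
print(force_overwrite_backup(datadir))  # the streamed WAL file survives
```

The point of the ordering is simply that the destructive step runs exactly once, before any concurrency begins.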
-
Committed by Daniel Gustafsson
Use the CStringGetTextDatum() construct when generating the reloptions array in order to improve readability. This patch started out as an attempt to remove duplication in calculating the string length but turned into a refactoring of the Datum creation instead. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Running ANALYZE with the HLL computation produces a lot of LOG messages which are geared more towards troubleshooting than general-purpose log files. Fold these under ANALYZE VERBOSE to avoid cluttering up logfiles on production systems unless explicitly asked for. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Bradford Boyle
- Added with-quicklz configure flag
- Added quicklz gpcontrib directory with C wrapper functions and SQL installation file
- Added simple quicklz functional tests
- Added #undef HAVE_LIBQUICKLZ to pg_config.h.win32, paralleling the recent change in pg_config.h.in that adds quicklz. pg_config.h.win32 should be autogenerated, but isn't in practice.
Co-authored-by: Jimmy Yih <jyih@pivotal.io> Co-authored-by: Ben Christel <bchristel@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by Venkatesh Raghavan
For GPDB 6 Beta, CentOS 6/7 need to be passing for the same commit to be a valid release candidate. Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Sambitesh Dash
-
Committed by Jacob Champion
The GPDB-specific constant PQPING_MIRROR_READY, which indicates that a mirror is ready for replication, was not handled in pg_isready. Additionally, the value we selected for PQPING_MIRROR_READY might at some point in the future conflict with upstream libpq, which would be a pain to untangle. Try to avoid that situation by increasing the value. Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
- 18 Jan 2019, 6 commits
-
-
Committed by Adam Berlin
-
Committed by David Kimura
There was a race condition where the fault could be unexpectedly triggered by a WAL sender object independent of pg_basebackup being run. We could make it more deterministic by incrementing the wait-for-triggered count, but the test as a whole didn't seem to add much value. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by David Kimura
Prior to this commit, the test recreated the tmp_check_* directory for each running test. This would lead to losing the datadir for a failing test if it wasn't the last one. This commit creates a new directory specific to each test and cleans up the artifacts of previous passing tests. Co-authored-by: David Kimura <dkimura@pivotal.io> Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Abhijit Subramanya
Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Abhijit Subramanya
This commit sets the default value of the GUC optimizer_penalize_broadcast_threshold to 100000. We have seen a lot of cases where a plan with a broadcast was chosen due to underestimation of cardinality; in such cases a Redistribute motion would have been better. This commit therefore penalizes broadcast when the number of rows is greater than 100000, so that Redistribute is favored in this case. We have tested the change on the perf pipeline and do not see any regression. Co-authored-by: Chris Hajas <chajas@pivotal.io>
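A minimal sketch of the intended effect follows; the penalty factor is an invented illustration, not ORCA's actual cost model, which is considerably more involved:

```python
# Hypothetical sketch of optimizer_penalize_broadcast_threshold: once the
# estimated row count exceeds the threshold, the broadcast motion's cost
# is inflated so that a redistribute motion wins the cost comparison even
# when cardinality was underestimated.
PENALIZE_BROADCAST_THRESHOLD = 100000  # the new default set by this commit
PENALTY_FACTOR = 100.0                 # assumed magnitude, for illustration

def broadcast_cost(base_cost, estimated_rows):
    """Cost of a broadcast motion after applying the threshold penalty."""
    if estimated_rows > PENALIZE_BROADCAST_THRESHOLD:
        return base_cost * PENALTY_FACTOR
    return base_cost

print(broadcast_cost(10.0, 50_000))   # below threshold: cost unchanged
print(broadcast_cost(10.0, 500_000))  # above threshold: heavily penalized
```

The design choice is a blunt one: rather than trying to fix the cardinality estimates themselves, the optimizer is nudged toward the motion type whose cost degrades more gracefully when the estimate is wrong.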
-
Committed by Heikki Linnakangas
I looked up this issue in the old JIRA instance:

> MPP-8014: bitmap indexes create entries in gp_distribution_policy
>
> postgres=# \d bar
>         Table "public.bar"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  i      | integer |
> Distributed by: (i)
>
> postgres=# create index bitmap_idx on bar using bitmap(i);
> CREATE INDEX
> postgres=# select localoid::regclass, * from gp_distribution_policy;
>           localoid          | localoid | attrnums
> ----------------------------+----------+----------
>  bar                        |    16398 | {1}
>  pg_bitmapindex.pg_bm_16415 |    16416 |
> (2 rows)

So the problem was that we created a gp_distribution_policy entry for the auxiliary heap table of the bitmap index. We no longer do that; this bug was fixed 9 years ago. But the test we have in mpp8014 would not fail even if the bug reappeared! Let's remove the test, as it's useless in its current form. It would be nice to have a proper test for that bug, but it doesn't seem very likely to reappear any time soon, so it doesn't seem worth the effort. Fixes https://github.com/greenplum-db/gpdb/issues/6315
-
- 17 Jan 2019, 3 commits
-
-
Committed by Adam Berlin
-
Committed by Daniel Gustafsson
"remoteHost" is not a parameter to ReadPostmasterTempFile.remote(); fix by just passing the value. The ReadPostmasterTempFile __init__ does, however, have a remoteHost parameter, which is likely where the confusion occurred. This codepath was rarely executed, but when reached it would produce the following error message:

[ERROR]:-Failed to stop standby. Attempting forceful termination of standby process
[CRITICAL]:-gpstop failed. (Reason='remote() got an unexpected keyword argument 'remoteHost'') exiting...

Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Daniel Gustafsson
This removes a duplicate import and a few set-but-never-used vars from the gpload.py code, as well as the including_defaults token, which was clearly unused. Also fixes a few typos while in there, one of which is in a user-facing error message. Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-