- 06 Jul 2018, 10 commits
-
-
Committed by Jialun
If a segment exists in gp_segment_configuration but its IP address cannot be resolved, we run into a runtime error on gang creation: ERROR: could not translate host name "segment-0a", port "40000" to address: Name or service not known (cdbutil.c:675). This happens even if segment-0a is a mirror and is marked as down. With this error queries cannot be executed, and gpstart and gpstop also fail. One way to trigger the issue:
- create a cluster with multiple segments;
- remove sdw1's DNS entry from /etc/hosts on mdw;
- kill the postgres primary process on sdw1.
FTS detects this failure and automatically switches to the mirror, but queries still cannot be executed.
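The fix boils down to not resolving addresses for segments that are already marked down. A minimal Python sketch of that logic (an illustrative model only — the actual code is C in cdbutil.c, and the segment/status representation here is assumed):

```python
import socket

# Hypothetical model of gang creation: resolve each segment's address,
# but skip segments already marked down (e.g. a failed mirror) instead
# of aborting the whole gang on a name-resolution error.
def resolve_gang(segments):
    """segments: list of dicts with 'host', 'port', 'status' ('u' up / 'd' down)."""
    resolved = []
    for seg in segments:
        if seg["status"] == "d":
            continue  # down segment: its address is never needed for the gang
        try:
            addr = socket.getaddrinfo(seg["host"], seg["port"])[0][4]
        except socket.gaierror as err:
            # only an unresolvable *live* segment is a hard error
            raise RuntimeError(
                f'could not translate host name "{seg["host"]}", '
                f'port "{seg["port"]}" to address: {err}'
            )
        resolved.append((seg["host"], addr))
    return resolved
```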
-
Committed by Mel Kiyama
* docs - update system catalog maintenance information:
  - updated the Admin Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE;
  - added a note to the REINDEX reference about running ANALYZE after REINDEX.
* docs - edits for the system catalog maintenance updates.
* docs - update the recommendation for running VACUUM and ANALYZE, based on dev input.
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - add foreign data wrapper-related reference pages.
* Remove the CREATE SERVER example referencing the default FDW.
* Edits from David, and "his" -> "their".
-
Committed by Jimmy Yih
We currently exit VACUUM early when there is a concurrent operation on an AO relation. Instead of exiting early, go through the rest of the AO segment files to see whether they have crossed the threshold for compaction.
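The change is essentially turning an early return into a `continue`. A minimal Python sketch of the scan (an illustrative model under assumed names — the real loop lives in the AO vacuum C code, and the tuple layout here is invented):

```python
# Hypothetical model of the changed loop: instead of returning as soon as one
# AO segment file has a concurrent operation on it, keep scanning the rest and
# collect every segment file past the compaction threshold.
def segfiles_to_compact(segfiles, threshold):
    """segfiles: list of (segno, hidden_tuple_ratio, busy); threshold: ratio."""
    candidates = []
    for segno, hidden_ratio, busy in segfiles:
        if busy:
            continue  # previously: an early exit here skipped all later segfiles
        if hidden_ratio >= threshold:
            candidates.append(segno)
    return candidates
```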
-
Committed by Jimmy Yih
TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO, this includes the auxiliary tables, which matters for the AO relation's pg_aoseg table: it records whether an AO segment file is available for write or waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that get stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by leaving the table unused long enough for another AO table to take its slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.
-
Committed by Jimmy Yih
ALTER TABLE commands tagged as AT_SetDistributedBy require a gather motion and do their own variation of creating a temporary table for CTAS, bypassing the usual ATRewriteTable, which does perform AppendOnlyHash cache entry invalidation. Without that invalidation, the entry could have invisible leaks in its AOSegfileStatus array that get stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by leaving the table unused long enough for another AO table to take its slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of the AT_SetDistributedBy ALTER TABLE cases.
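In both relation-rewrite paths the fix is the same: drop the relation's AppendOnlyHash entry once the rewrite is done. A minimal Python sketch of that idea (an illustrative model only — AppendOnlyHash is a C shared-memory hash in GPDB, and the class and function names here are assumptions):

```python
# Hypothetical model of the AppendOnlyHash cache fix: after a relation rewrite
# (TRUNCATE or an AT_SetDistributedBy ALTER TABLE), drop the relation's cache
# entry so stale AOSegfileStatus state (e.g. AOSEG_STATE_AWAITING_DROP) cannot
# leak into the freshly rewritten relation.
AOSEG_STATE_AWAITING_DROP = "AWAITING_DROP"

class AppendOnlyHash:
    def __init__(self):
        self.entries = {}  # relid -> list of per-segfile states

    def invalidate(self, relid):
        # removing the entry forces it to be rebuilt from pg_aoseg on next use
        self.entries.pop(relid, None)

def rewrite_relation(cache, relid):
    # ... create the temporary table, swap it with the real relation, then:
    cache.invalidate(relid)
```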
-
Committed by Jimmy Yih
The schema is named differently from the one in the search_path, so all the tables, views, functions, etc. were incorrectly created in the public schema.
-
Committed by Omer Arap
We had significant duplication between the hyperloglog extension and the utility library used in the analyze-related code. This commit removes the duplication as well as a significant amount of dead code. It also fixes some compiler warnings and some Coverity issues, and puts the hyperloglog functions in a separate schema which is not modifiable by non-superusers.
Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Lisa Owen
-
- 04 Jul 2018, 7 commits
-
-
Committed by Daniel Gustafsson
Fix some of the more obvious breaches of common style in code I just read for another patchset. There are no logical changes introduced here, only rearrangement for clarity.
Discussion: https://github.com/greenplum-db/gpdb/pull/5216
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
The backup list is either a leftover debugging artefact, or its use was removed during the merge work and never made it into the rewritten commit history. Either way, it serves no purpose, so remove it from this hot code path.
Discussion: https://github.com/greenplum-db/gpdb/pull/5216
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
Commit e0409357 moved to using default estimates rather than interrogating the QEs for relations which lack statistics. As a side effect, the cdb_default_stats_used member was hardcoded to false and the warnings for missing statistics never fired. Rather than resurrecting the warnings, this removes the code that attempts to figure out whether the warnings apply at all, since it seems quite expensive to run that in the hot path of every join query being planned.
Discussion: https://github.com/greenplum-db/gpdb/pull/5216
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Adam Lee
The map was missed by mistake; all AO loading actions need it.
-
Committed by Daniel Gustafsson
Commit 4483b7d3 removed spclocation from the tablespace catalog, but the \db command in psql wasn't updated to match, as the change was backported to Greenplum prior to being introduced upstream. This will eventually go away as we merge with PostgreSQL, but that's no excuse for not fixing what is broken.
Discussion: https://github.com/greenplum-db/gpdb/pull/5238
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
It is not clear why the GUC gp_vmem_protect_limit is set to a specific value for Darwin.
-
Committed by Todd Sedano
Authored-by: Todd Sedano <tsedano@pivotal.io>
-
- 03 Jul 2018, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Starting with 8.4 commit 1d577f5e, the backend checks for the existence of the directory and creates it if not present. So we can avoid creating it in the utilities.
-
Committed by Ashwin Agrawal
The AO implementation aligns with the 8.4-and-later heap implementation: write the data during recovery rather than failing. Note also how seek behaves for AO during replay: it will not fail if the file does not yet contain that much data, since seek moves to the requested offset regardless of the file's length, and the write then proceeds (leaving a hole in the file in that case). So it does not result in a seek failure as such. We write the data, and if a truncation had happened, it will happen again during recovery.
-
Committed by Ashwin Agrawal
The lock level looks fine, hence resolving the FIXMEs.
-
Committed by Ashwin Agrawal
Many tests flow through this code path, particularly in alter_table.sql. Nothing broke with the removal, and gpcheckcat flagged nothing, so it is safe to delete.
-
Committed by Hubert Zhang
-
- 02 Jul 2018, 1 commit
-
-
Committed by Jialun
- Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in the session are not limited by resource groups.
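Conceptually the GUC adds one early-out in the admission decision. A minimal Python sketch of that decision (an illustrative model only — the real check is in GPDB's resource-group C code, and the dict-based session/group representation here is assumed):

```python
# Hypothetical model of resource-group admission with the bypass GUC:
# a session with gp_resource_group_bypass=on skips the concurrency check
# entirely; otherwise the query must fit under the group's limit.
def admit_query(session, group):
    """session: {'gp_resource_group_bypass': bool}
    group:   {'running': int, 'concurrency_limit': int}
    Returns True if the query may run now."""
    if session.get("gp_resource_group_bypass", False):
        return True  # bypass: no resource-group limits apply
    return group["running"] < group["concurrency_limit"]
```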
-
- 30 Jun 2018, 7 commits
-
-
Committed by David Yozie
-
Committed by Ivan Leskin
* Extra docs for gp_external_enable_filter_pushdown: add documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension.
* Minor documentation text fixes, proposed by @dyozie.
* Clarify pushdown support in PXF: list the PXF connectors that support pushdown; state that the GPDB PXF extension supports pushdown; add a list of conditions that must be fulfilled for pushdown to work when the PXF protocol is used.
* Correct the list of PXF connectors with pushdown: state that the Hive and HBase PXF connectors support filter predicate pushdown; remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
-
Committed by kaknikhil
-
Committed by Ashwin Agrawal
The number of buffers fsynced to disk varies based on how hint bits get updated. For example, the buffer for the global table pg_tablespace is sometimes flushed and sometimes not, depending on which tests were executed before this test.
-
Committed by Ashwin Agrawal
Greenplum added rd_issyscat to the Relation structure. Its only usage is in markDirty(), to decide whether a buffer should be marked dirty. rd_issyscat was set by checking whether the relation name starts with "pg_", which is a very loose check. Instead, base the check on oid < FirstNormalObjectId, or, to cover the pg_aoseg tables, RelationGetNamespace(relation) == PG_AOSEGMENT_NAMESPACE. This allows us to remove the extra variable. This patch is not trying to change the intent of the GUC gp_disable_tuple_hints; that is an altogether different discussion.
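The replacement check can be stated in a couple of lines. A minimal Python sketch (an illustrative model only — FirstNormalObjectId is 16384 in PostgreSQL's transam.h, but the PG_AOSEGMENT_NAMESPACE value used below is an assumed placeholder, not the real GPDB constant):

```python
FirstNormalObjectId = 16384    # first OID assigned to user objects (PostgreSQL)
PG_AOSEGMENT_NAMESPACE = 6104  # assumed placeholder for the pg_aoseg namespace OID

# Hypothetical model of the replacement check: a relation is treated as a
# system catalog if its OID is below FirstNormalObjectId, or if it lives in
# the pg_aoseg namespace (AO auxiliary tables get normal, user-range OIDs).
def is_system_catalog(rel_oid, rel_namespace):
    return rel_oid < FirstNormalObjectId or rel_namespace == PG_AOSEGMENT_NAMESPACE
```

This replaces the name-prefix test: a user table named "pg_something" no longer counts as a catalog, while AO auxiliary tables with user-range OIDs still do.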
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
The issue happens because of constant folding in the testexpr of the SUBPLAN expression node. The testexpr may be reduced to a const, and any PARAMs previously used in the testexpr disappear; however, the subplan still remains. This behavior is similar in upstream Postgres 10 and may be a performance consideration. Leaving that aside for now, the constant folding produces an elog(ERROR) when the plan has subplans and no PARAMs are used. The check in `addRemoteExecParamsToParamList()` uses `context.params`, which computes the PARAMs used in the plan, and `nIntPrm = list_length(root->glob->paramlist)`, which is the number of PARAMs declared/created. Given the ERROR messages generated, the check makes no sense, especially since it won't even trip for the InitPlan bug (mentioned in the comments) as long as there is at least one PARAM in the query. This commit removes the check since it doesn't correctly capture the intent. In theory, it could be replaced by one specifically aimed at InitPlans, that is, find all the param ids used by InitPlans and then make sure they are used in the plan. But we already do this and remove any unused initplans in `remove_unused_initplans()`, so there is no point in adding that.
Fixes #2839
-
- 29 Jun 2018, 6 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Omer Arap
To merge stats in incremental analyze for the root partition, we use the leaf tables' statistics. In commit b28d0297, we fixed an issue where a child's attnum did not match the root table's attnum for the same column. The test added with that fix also exposed a bug in the analyze code. This commit fixes the issue in analyze using a fix similar to b28d0297.
-
Committed by Lisa Oakley
This is related to the work we have done to fix the sles11 and windows compilation failures.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Omer Arap
Previously, we would use the root table's information to acquire stats from the syscache, which returned no result. It returns nothing because we query the syscache using the `inh` field, which is set to true for the root table and false for the leaf tables. Another, less evident issue is the possibility of mismatching attnums between the root and leaf tables after certain scenarios: when we drop a column and then split a partition, unchanged and old partitions preserve the old attnums, while newly created partitions get consecutive attnums with no gaps. If we query the syscache using the root's attnum for that column, we get wrong stats for that column; passing the root's `inh` hid this issue. This commit fixes the problem by getting the attribute name using the root's attnum, and using that name to acquire the correct attnum for the largest leaf partition.
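The remapping step is simple once stated as data: go from the root's attnum to the attribute name, then from the name back to the leaf's attnum. A minimal Python sketch (an illustrative model only — the real code works against pg_attribute via the syscache, and this helper name is invented):

```python
# Hypothetical model of the fix: a column's attnum can differ between the
# root partition and a leaf (e.g. after DROP COLUMN + SPLIT PARTITION), so
# map root attnum -> attribute name -> leaf attnum before looking up stats.
def leaf_attnum(root_cols, leaf_cols, root_attnum):
    """root_cols/leaf_cols: dicts mapping attnum -> attname."""
    attname = root_cols[root_attnum]
    for num, name in leaf_cols.items():
        if name == attname:
            return num
    raise KeyError(f"column {attname!r} not found in leaf partition")
```

For example, if column "b" was dropped, the root keeps a gap in its attnums while a freshly split leaf renumbers without gaps, so the same column "c" has attnum 3 in the root but 2 in the leaf.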
-
Committed by Ashwin Agrawal
No need for a database-wide analyze; only the specific table needs to be analyzed for the test.
-
Committed by Ashwin Agrawal
GPDB currently supports only one replica. FTS and other components need to be adapted to support 1:n replication; until then, restrict the max_wal_senders GUC to 1. The maximum value of the GUC can be raised once the code can handle it. Also, remove the setting of max_wal_senders in the postmaster, which was added earlier to deal with filerep/walrep coexistence.
-
- 27 Jun 2018, 3 commits
-
-
Committed by Adam Lee
These three were listed in the common options:
```
FILL MISSING FIELDS
LOG ERRORS [SEGMENT REJECT LIMIT <replaceable class="parameter">count</replaceable> [ROWS | PERCENT] ]
IGNORE EXTERNAL PARTITIONS
```
But:
1. they do not work with both FROM and TO;
2. FILL MISSING FIELDS would be [FILL_MISSING_FIELDS true | false] in the generic form, which is silly; the old syntax is better;
3. SREH and IGNORE EXTERNAL PARTITIONS cannot be specified as generic options.
Also document the missing NEWLINE option.
-
Committed by Adam Lee
Unloading doesn't need it, and neither does checking the distribution policy.
-
Committed by Trevor Yacovone
Also, remove the dev-generated pipeline and add the prod-generated pipeline.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-