- 10 Jul 2018, 2 commits
-
-
Committed by Daniel Gustafsson
Make sure to include all required header files to silence compilers that are picky about that.
-
Committed by Tom Lane
According to recent tests, this case now works fine, so there's no reason to reject it anymore. (Even if there are still some OpenBSD platforms in the wild where it doesn't work, removing the check won't break any case that worked before.) We can actually remove the entire test that discovers whether libpython is threaded, since without the OpenBSD case there's no need to know that at all. Per report from Davin Potts. Back-patch to all active branches. Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
- 09 Jul 2018, 4 commits
-
-
Committed by Daniel Gustafsson
Following the change in 8fcd3fdd to cost-based enable GUCs, failing to find a way to construct an N-way join should raise an error rather than a debug message (as in upstream). Reported-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by 阿福Chris
-
Committed by 阿福Chris
Discussion: https://github.com/greenplum-db/gpdb/pull/5155 Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Heikki Linnakangas
Instead of completely disabling the generation of Paths with disabled plan types, add a high penalty to their cost estimates, like in the upstream. This reduces our diff vs. upstream, making future merges more straightforward. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/Az2cDcqf73g/_tY6Yv1kBgAJ Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io> Reviewed-by: Richard Guo <riguo@pivotal.io>
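The cost-based disabling described above can be sketched in a few lines. The names and constant here are illustrative, not the actual planner symbols; upstream PostgreSQL uses a `disable_cost` constant of a similar magnitude so that a disabled path is only chosen when no alternative exists.

```python
DISABLE_COST = 1.0e10  # large penalty, same order of magnitude as upstream's disable_cost

def path_cost(base_cost, plan_type, enabled_plan_types):
    """Return the effective cost, penalizing (not forbidding) disabled plan types."""
    if plan_type not in enabled_plan_types:
        return base_cost + DISABLE_COST
    return base_cost

def cheapest(paths, enabled_plan_types):
    """Pick the path with the lowest effective cost."""
    return min(paths, key=lambda p: path_cost(p["cost"], p["type"], enabled_plan_types))
```

With all plan types enabled the cheaper path wins; with one type disabled, the planner still has a valid plan to fall back to instead of failing outright.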
-
- 07 Jul 2018, 2 commits
-
-
Committed by Jimmy Yih
As part of the Postgres 8.3 merge, all heap tables now automatically create an array type. The array type will usually be created with typname '_<heap_name>', since the automatically created composite type already takes the typname '<heap_name>' first. If typname '_<heap_name>' is taken, the logic will continue to prepend underscores until there is no collision (truncating the end if the typname exceeds the NAMEDATALEN of 64). This might be an oversight in upstream Postgres, since certain scenarios involving creating a large number of heap tables with similar names could result in so many typname collisions that no more heap tables with similar names can be created. This is very noticeable in Greenplum heap partition tables because Greenplum has logic to automatically name child partitions with similar names instead of having the user name each child partition. To prevent typname collision failures when creating a heap partition table with a large number of child partitions, we will now stop automatically creating the array type for child partitions. References: https://www.postgresql.org/message-id/flat/20070302234016.GF3665%40fetter.org https://github.com/postgres/postgres/commit/bc8036fc666a8f846b1d4b2f935af7edd90eb5aa
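The underscore-prepending collision loop described above can be sketched as follows. The function name and the set-based lookup are hypothetical stand-ins for the real pg_type catalog logic.

```python
NAMEDATALEN = 64  # PostgreSQL identifier limit (the C buffer includes a trailing NUL)

def array_typname(heap_name, existing_typnames):
    """Prepend underscores until the candidate array type name no longer
    collides, truncating to fit within NAMEDATALEN - 1 characters."""
    candidate = "_" + heap_name
    while candidate[:NAMEDATALEN - 1] in existing_typnames:
        candidate = "_" + candidate
    return candidate[:NAMEDATALEN - 1]
```

With many similarly named partitions, each new table burns another candidate name, which is the failure mode the commit avoids by not creating array types for child partitions at all.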
-
Committed by Chris Hajas
The pg_get_partition_template_def and pg_get_partition_def functions take access share locks, but do not release them until the end of the transaction. If a transaction is long-running, this can conflict with other user operations. It is not necessary to hold the lock indefinitely, as the lock is only needed for the duration of the function call. Co-authored-by: Chris Hajas <chajas@pivotal.io> Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
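The fix amounts to scoping the lock to the function call rather than the transaction. A toy model (all names here are hypothetical, not the real lock manager API):

```python
class LockManager:
    """Toy lock manager that tracks which relations hold an access share lock."""
    def __init__(self):
        self.held = set()

    def acquire_access_share(self, rel):
        self.held.add(rel)

    def release(self, rel):
        self.held.discard(rel)

def get_partition_def(lockmgr, rel):
    """Hypothetical analogue of pg_get_partition_def: hold the lock only
    for the duration of the call, not until the end of the transaction."""
    lockmgr.acquire_access_share(rel)
    try:
        return f"partition definition of {rel}"
    finally:
        lockmgr.release(rel)  # released here instead of at transaction end
```

The try/finally shape guarantees the lock is dropped even if building the definition fails partway through.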
-
- 06 Jul 2018, 10 commits
-
-
Committed by Jialun
If a segment exists in gp_segment_configuration but its IP address cannot be resolved, we run into a runtime error on gang creation: ERROR: could not translate host name "segment-0a", port "40000" to address: Name or service not known (cdbutil.c:675) This happens even if segment-0a is a mirror and is marked as down. With this error, queries cannot be executed, and gpstart and gpstop will also fail. One way to trigger the issue: - create a cluster with multiple segments; - remove sdw1's DNS entry from /etc/hosts on mdw; - kill the postgres primary process on sdw1. FTS can detect this error and automatically switch to the mirror, but queries still cannot be executed.
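One plausible shape of the fix is to skip address resolution for segments already marked down, so an unresolvable mirror no longer aborts gang creation. The segment dicts and status codes below are illustrative, not the actual gp_segment_configuration schema.

```python
import socket

def resolve_gang_hosts(segments):
    """Resolve addresses only for segments that are up; a segment marked
    down ('d') is skipped instead of failing the whole gang."""
    addresses = {}
    for seg in segments:
        if seg["status"] == "d":  # down segment: no gang member needed
            continue
        try:
            addresses[seg["host"]] = socket.getaddrinfo(seg["host"], seg["port"])
        except socket.gaierror as e:
            raise RuntimeError(
                'could not translate host name "%s", port "%s" to address: %s'
                % (seg["host"], seg["port"], e))
    return addresses
```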
-
Committed by Mel Kiyama
* docs - update system catalog maintenance information. --Updated Admin. Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE --Added note to REINDEX reference about running ANALYZE after REINDEX. * docs - edits for system catalog maintenance updates * docs - update recommendation for running vacuum and analyze. Update based on dev input.
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - add foreign data wrapper-related ref pages * remove CREATE SERVER example referencing default fdw * edits from david, and his -> their
-
Committed by Jimmy Yih
We currently exit VACUUM early when there is a concurrent operation on an AO relation. Instead of exiting early, go through the rest of the AO segment files to see if they have crossed the threshold for compaction.
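The change in iteration strategy can be sketched as follows; the segfile representation and threshold semantics are illustrative, not the real AO vacuum code.

```python
def segfiles_to_compact(segfiles, threshold, in_use):
    """Instead of aborting the whole scan when one AO segment file is busy
    with a concurrent operation, skip it and keep checking the remaining
    files against the compaction threshold."""
    result = []
    for segno, hidden_ratio in segfiles:
        if segno in in_use:
            continue  # concurrent operation on this segfile: skip, don't bail out
        if hidden_ratio >= threshold:
            result.append(segno)
    return result
```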
-
Committed by Jimmy Yih
TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO, this includes the auxiliary tables, which is concerning for the AO relation's pg_aoseg table, which records whether an AO segment file is available for write or waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that get stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table so that another AO table can cache itself in that slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.
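A toy model of the invalidation: the cache and state names follow the commit message, but the code is purely illustrative.

```python
class AppendOnlyHash:
    """Toy per-relation cache; invalidating on TRUNCATE prevents stale
    segfile-status entries from leaking."""
    def __init__(self):
        self.entries = {}

    def get(self, relid):
        return self.entries.setdefault(relid, {"segstatus": {}})

    def invalidate(self, relid):
        self.entries.pop(relid, None)

def truncate_ao_relation(cache, relid):
    # ... swap in the rewritten relation and its auxiliary tables ...
    cache.invalidate(relid)  # drop the entry so stale AWAITING_DROP state can't leak
```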
-
Committed by Jimmy Yih
ALTER TABLE commands that are tagged as AT_SetDistributedBy require a gather motion and do their own variation of creating a temporary table for CTAS, basically bypassing the usual ATRewriteTable, which does perform AppendOnlyHash cache entry invalidation. Without that invalidation, the entry could have invisible leaks in its AOSegfileStatus array that get stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table so that another AO table can cache itself in that slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of the AT_SetDistributedBy ALTER TABLE cases.
-
Committed by Jimmy Yih
The schema is named differently from the one used in the search_path, so all the tables, views, functions, etc. were incorrectly being created in the public schema.
-
Committed by Omer Arap
We had significant duplication between the hyperloglog extension and the utility library used in the analyze-related code. This commit removes the duplication as well as a significant amount of dead code. It also fixes some compiler warnings and some Coverity issues. This commit also puts the hyperloglog functions in a separate schema which is not modifiable by non-superusers. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Lisa Owen
-
- 04 Jul 2018, 7 commits
-
-
Committed by Daniel Gustafsson
Fixing some of the more obvious breaches of common style around the code I just read for another patchset. There are no logical changes introduced here, only rearrangement for clarity. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
The backup list is either a leftover debugging artefact, or its use was removed during the merge work and never made it into the rewritten commit history. Either way, it serves no purpose, so remove it from this hot code path. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
Commit e0409357 moved to using default estimates rather than interrogating the QEs for relations which lack statistics. As an effect of this, the cdb_default_stats_used member was hardcoded to false and the warnings for missing statistics never fired. Rather than resurrecting the warnings, this removes the code that attempts to figure out whether the warnings apply at all, since it seems quite expensive to run that in the hot path of every join query being planned. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Adam Lee
The map was missed by mistake; all AO loading actions need it.
-
Committed by Daniel Gustafsson
Commit 4483b7d3 removed spclocation from the tablespace catalog, but the \db command in psql wasn't updated to match in the corresponding Greenplum version, as it was backported prior to when it was introduced in upstream. This will eventually go away as we merge with PostgreSQL, but that's not an excuse for not fixing what is broken. Discussion: https://github.com/greenplum-db/gpdb/pull/5238 Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
It is not clear why the GUC gp_vmem_protect_limit is set to a specific value for Darwin.
-
Committed by Todd Sedano
Authored-by: Todd Sedano <tsedano@pivotal.io>
-
- 03 Jul 2018, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Starting with the 8.4 commit 1d577f5e, the backend checks for the existence of the directory and creates it if not present, so we can avoid creating it in the utilities.
-
Committed by Ashwin Agrawal
The AO implementation aligns with the 8.4-and-later heap implementation: write the data during recovery rather than fail. Note also that, given the way the seek is performed during AO replay, it will not fail if the file does not yet contain that much data. A seek positions to the requested offset regardless of the file's length, and the subsequent write succeeds (leaving a hole in the file in this case), so it does not result in a seek failure. We write the data, and if a truncation had happened, it will happen again during recovery.
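The seek behavior the commit relies on is ordinary POSIX file semantics and is easy to demonstrate: seeking past end-of-file does not raise an error, and a later write extends the file, leaving a hole over the unwritten range.

```python
import os
import tempfile

def write_at(path, offset, data):
    """Seek to an offset (possibly past EOF) and write; no seek error occurs."""
    with open(path, "r+b") as f:
        f.seek(offset)  # seeking beyond the current length is allowed
        f.write(data)

fd, path = tempfile.mkstemp()
os.close(fd)
write_at(path, 8192, b"payload")  # file was empty; bytes 0..8191 become a hole
size = os.path.getsize(path)      # 8192 + len(b"payload") = 8199
os.remove(path)
```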
-
Committed by Ashwin Agrawal
The lock level looks fine, hence resolving the FIXMEs.
-
Committed by Ashwin Agrawal
Many tests flow through this code path, particularly in alter_table.sql. Nothing broke with the removal, and gpcheckcat flagged nothing, so it is safe to delete.
-
Committed by Hubert Zhang
-
- 02 Jul 2018, 1 commit
-
-
Committed by Jialun
- Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in this session are not limited by resource groups.
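A minimal model of such a session-level bypass switch. The GUC name comes from the commit; the admission-control logic here is invented purely for illustration.

```python
class Session:
    """Toy session carrying GUC settings."""
    def __init__(self):
        self.gucs = {"gp_resource_group_bypass": False}

def admit_query(session, group_has_slot):
    """Admit a query: with the bypass GUC on, skip resource group limits."""
    if session.gucs["gp_resource_group_bypass"]:
        return True
    return group_has_slot
```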
-
- 30 Jun 2018, 7 commits
-
-
Committed by David Yozie
-
Committed by Ivan Leskin
* Extra docs for gp_external_enable_filter_pushdown Add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension. * Minor doc text fixes Minor documentation text fixes, proposed by @dyozie. * Clarify the pushdown support by PXF Add the following information: * List the PXF connectors that support pushdown; * State that the GPDB PXF extension supports pushdown; * Add a list of conditions that need to be fulfilled for the pushdown feature to work when the PXF protocol is used. * Correct the list of PXF connectors with pushdown * State that the Hive and HBase PXF connectors support filter predicate pushdown; * Remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
-
Committed by kaknikhil
-
Committed by Ashwin Agrawal
The number of buffers fsynced to disk varies based on how hint bits get updated. For example, sometimes the global table pg_tablespace buffer is flushed and sometimes not, depending on which tests were executed before this test.
-
Committed by Ashwin Agrawal
Greenplum added rd_isyscat to the Relation structure. Its only usage is in markDirty(), to decide whether a buffer should be marked dirty. rd_issyscat was set by checking whether the relation name starts with "pg_", which is a very loose test anyway. Modified to instead check whether oid < FirstNormalObjectId or, to cover the pg_aoseg tables, RelationGetNamespace(relation) == PG_AOSEGMENT_NAMESPACE. This allows us to remove the extra variable. This patch does not try to change the intent of the GUC `gp_disable_tuple_hints`; that is a separate discussion.
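The replacement test reads roughly as follows. FirstNormalObjectId is the real PostgreSQL constant (16384); the namespace OID used here is a made-up placeholder, not the actual PG_AOSEGMENT_NAMESPACE value.

```python
FirstNormalObjectId = 16384    # first OID available for user-created objects
PG_AOSEGMENT_NAMESPACE = 6104  # placeholder value for the pg_aoseg namespace OID

def is_system_catalog(rel_oid, rel_namespace_oid):
    """OID-based check replacing the loose 'name starts with pg_' test:
    system objects have OIDs below FirstNormalObjectId, and pg_aoseg
    auxiliary tables are identified by their namespace."""
    return (rel_oid < FirstNormalObjectId
            or rel_namespace_oid == PG_AOSEGMENT_NAMESPACE)
```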
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
The issue happens because of constant folding in the testexpr of the SUBPLAN expression node. The testexpr may be reduced to a const, and any PARAMs previously used in the testexpr disappear; however, the subplan still remains. This behavior is similar in upstream Postgres 10 and may be a performance consideration. Leaving that aside for now, the constant folding produces elog(ERROR)s when the plan has subplans and no PARAMs are used. The check in `addRemoteExecParamsToParamList()` uses `context.params`, which computes the PARAMs used in the plan, and `nIntPrm = list_length(root->glob->paramlist)`, which is the number of PARAMs declared/created. Given the ERROR messages generated, the above check makes no sense, especially since it won't even trip for the InitPlan bug (mentioned in the comments) as long as there is at least one PARAM in the query. This commit removes this check since it doesn't correctly capture the intent. In theory, it could be replaced by one specifically aimed at InitPlans, that is, find all the param ids used by InitPlans and then make sure they are used in the plan. But we already do this and remove any unused initplans in `remove_unused_initplans()`, so I don't see the point of adding that. Fixes #2839
-
- 29 Jun 2018, 1 commit
-
-
Committed by Daniel Gustafsson
-