- 04 Jul 2018, 6 commits
-
-
Committed by Daniel Gustafsson

The backup list is either a leftover debugging artefact, or its use was removed during the merge work and never made it into the rewritten commit history. Either way, it serves no purpose, so remove it from this hot code path.
Discussion: https://github.com/greenplum-db/gpdb/pull/5216
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson

Commit e0409357 moved to using default estimates rather than interrogating the QEs for relations which lack statistics. As an effect of this, the cdb_default_stats_used member was hardcoded to false and the warnings for missing statistics never fired. Rather than resurrecting the warnings, this removes the code that attempts to figure out whether the warnings apply at all, since it seems quite expensive to run that in the hot path of every join query being planned.
Discussion: https://github.com/greenplum-db/gpdb/pull/5216
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Adam Lee

The map was missed by mistake; all AO loading actions need it.
-
Committed by Daniel Gustafsson

Commit 4483b7d3 removed spclocation from the tablespace catalog, but the \db command in psql wasn't updated to match: the corresponding Greenplum version was backported prior to when it was introduced upstream. This will eventually go away as we merge with PostgreSQL, but that's not an excuse for not fixing what is broken.
Discussion: https://github.com/greenplum-db/gpdb/pull/5238
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal

Not sure why the setting of the GUC gp_vmem_protect_limit has a specific value for Darwin.
-
Committed by Todd Sedano

Authored-by: Todd Sedano <tsedano@pivotal.io>
-
- 03 Jul 2018, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal

Starting with 8.4 commit 1d577f5e, the backend checks for the existence of the directory and, if it is not present, creates it. So we can avoid creating it in the utilities.
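The create-if-missing pattern the backend uses can be sketched with plain POSIX calls; `ensure_dir` is a hypothetical helper for illustration, not actual GPDB code:

```c
#include <errno.h>
#include <stdbool.h>
#include <sys/stat.h>

/* Sketch of "create the directory only if it does not already exist":
 * attempt mkdir() and treat EEXIST as success, in the spirit of what
 * the 8.4 backend does for a missing tablespace directory.
 * ensure_dir is a hypothetical helper, not GPDB code. */
static bool
ensure_dir(const char *path)
{
    if (mkdir(path, S_IRWXU) == 0)
        return true;            /* we created it */
    return errno == EEXIST;     /* it was already there: also fine */
}
```

Attempting the mkdir() and inspecting errno avoids the stat-then-mkdir race, which is why the backend doing this itself makes the utility-side creation redundant.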
-
Committed by Ashwin Agrawal

The AO implementation aligns with the 8.4-and-later heap implementation: write the data during recovery rather than fail. Also note that, given the way the seek is performed during AO replay, it is not going to fail if the file doesn't yet contain that much data. A seek works irrespective of the length of the file: it will seek to the requested offset and the data will be written (the file will have a hole in it in this case), but it will not result in a seek failure as such. We will write the data, and if a truncation had happened, it will happen again during recovery.
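The seek behavior the message relies on can be demonstrated directly: seeking past end-of-file succeeds, and the subsequent write extends the file, leaving a hole. `write_past_eof` is an illustrative helper, not GPDB code:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Seek past the current end-of-file and write there. Returns the
 * resulting file size, or -1 on error. Demonstrates that neither
 * lseek() nor write() fails when the file is shorter than the
 * requested offset; the skipped range becomes a hole. */
static off_t
write_past_eof(const char *path, off_t offset, const char *data, size_t len)
{
    struct stat st;
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);

    if (fd < 0)
        return -1;
    if (lseek(fd, offset, SEEK_SET) != offset ||   /* succeeds on a short file */
        write(fd, data, len) != (ssize_t) len ||
        fstat(fd, &st) != 0)
    {
        close(fd);
        return -1;
    }
    close(fd);
    return st.st_size;
}
```

On a freshly truncated file, writing 5 bytes at offset 8192 yields a file of 8197 bytes with a hole in front, which is exactly why AO replay can safely write at an offset beyond the current data.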
-
Committed by Ashwin Agrawal

The lock level looks fine, hence resolving the FIXMEs.
-
Committed by Ashwin Agrawal

There are many tests which flow through this code path, specifically in alter_table.sql. Nothing exploded with its removal, and gpcheckcat flagged nothing, so it is fine to delete it.
-
Committed by Hubert Zhang
-
- 02 Jul 2018, 1 commit
-
-
Committed by Jialun

- Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in this session will not be limited by resource groups.
-
- 30 Jun 2018, 7 commits
-
-
Committed by David Yozie
-
Committed by Ivan Leskin

* Extra docs for gp_external_enable_filter_pushdown: add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension.
* Minor doc text fixes, proposed by @dyozie.
* Clarify the pushdown support by PXF. Add the following information: list the PXF connectors that support pushdown; state that the GPDB PXF extension supports pushdown; add a list of conditions that need to be fulfilled for the pushdown feature to work when the PXF protocol is used.
* Correct the list of PXF connectors with pushdown: state that the Hive and HBase PXF connectors support filter predicate pushdown; remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
-
Committed by kaknikhil
-
Committed by Ashwin Agrawal

The number of fsync buffers synced to disk varies based on how hint bits get updated. For example, sometimes I see the global table pg_tablespace's buffer flushed and sometimes not, depending on what tests were executed before this test.
-
Committed by Ashwin Agrawal

Greenplum added rd_issyscat to the Relation structure. Its only usage is in markDirty(), to decide whether the buffer should be marked dirty or not. rd_issyscat was set based on checking whether the relation name starts with "pg_", which is a very loose check anyway. Modified instead to base the check on whether oid < FirstNormalObjectId or, to cover the pg_aoseg tables, RelationGetNamespace(relation) == PG_AOSEGMENT_NAMESPACE. This allows us to remove the extra variable. This patch is not trying to change the intent of the GUC `gp_disable_tuple_hints`; that is altogether a different discussion.
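The replacement check can be sketched as below. FirstNormalObjectId is 16384 in PostgreSQL; the numeric value used for PG_AOSEGMENT_NAMESPACE is illustrative rather than taken from the GPDB headers, and `rel_is_syscat` is a stand-in name, not the actual markDirty() code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;

/* FirstNormalObjectId is 16384 in PostgreSQL; the value for
 * PG_AOSEGMENT_NAMESPACE below is illustrative only. */
#define FirstNormalObjectId    16384
#define PG_AOSEGMENT_NAMESPACE 6104

/* Sketch of the replacement check: a relation counts as a system
 * catalog if its OID predates user-created objects, or if it lives
 * in the pg_aoseg namespace (covering the AO segment tables). */
static bool
rel_is_syscat(Oid relid, Oid relnamespace)
{
    return relid < FirstNormalObjectId ||
           relnamespace == PG_AOSEGMENT_NAMESPACE;
}
```

An OID threshold is both cheaper and tighter than a `pg_` name-prefix test, since user tables may legitimately be named with a `pg_` prefix while bootstrap catalogs always carry low OIDs.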
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar

The issue happens because of constant folding in the testexpr of the SUBPLAN expression node. The testexpr may be reduced to a const, and any PARAMs previously used in the testexpr disappear; however, the subplan still remains. This behavior is similar in upstream Postgres 10 and may be a performance consideration. Leaving that aside for now, the constant folding produces an elog(ERROR) when the plan has subplans and no PARAMs are used. The check in `addRemoteExecParamsToParamList()` uses `context.params`, which computes the PARAMs used in the plan, and `nIntPrm = list_length(root->glob->paramlist)`, which is the number of PARAMs declared/created. Given the ERROR messages generated, the above check makes no sense, especially since it won't even trip for the InitPlan bug (mentioned in the comments) as long as there is at least one PARAM in the query. This commit removes the check since it doesn't correctly capture the intent. In theory, it could be replaced by one specifically aimed at InitPlans, that is, find all the param ids used by InitPlans and then make sure they are used in the plan. But we already do this and remove any unused initplans in `remove_unused_initplans()`, so I don't see the point of adding that. Fixes #2839
-
- 29 Jun 2018, 6 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Omer Arap

To merge stats in incremental analyze for the root partition, we use the leaf tables' statistics. In commit b28d0297, we fixed an issue where a child's attnum did not match the root table's attnum for the same column. The test added with that fix also exposed a bug in the analyze code. This commit fixes the issue in analyze using a fix similar to b28d0297.
-
Committed by Lisa Oakley

This is related to the work we have done to fix the sles11 and windows compilation failures.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Omer Arap

Previously, we would use the root table's information to acquire stats from the syscache, which returned no result. The reason it returns no result is that we query the syscache using the `inh` field, which is set to true for the root table and false for the leaf tables. Another issue, which is not evident, is the possibility of mismatching `attnum`s between the root and leaf tables after running specific scenarios. When we delete a column and then split a partition, unchanged and old partitions preserve the old attnums, while newly created partitions get increasing attnums with no gaps. If we query the syscache using the root's attnum for that column, we would get wrong stats for that specific column; passing the root's `inh` hid the issue of having wrong stats. This commit fixes the issue by getting the attribute name using the root's attnum and using that name to acquire the correct attnum for the largest leaf partition.
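The name-based attnum translation described above can be sketched as a minimal simulation; the `AttEntry` struct and the helpers below are illustrative stand-ins for the catalog lookups (get_attname/get_attnum), not the actual analyze code:

```c
#include <string.h>

/* After DROP COLUMN + SPLIT PARTITION, a column can carry different
 * attnums in the root and in a newly created leaf, so a stat lookup
 * must translate root attnum -> name -> leaf attnum rather than
 * reuse the root's attnum directly. Illustrative stand-ins only. */
typedef struct
{
    const char *attname;
    int         attnum;
} AttEntry;

static const char *
name_for_attnum(const AttEntry *atts, int natts, int attnum)
{
    for (int i = 0; i < natts; i++)
        if (atts[i].attnum == attnum)
            return atts[i].attname;
    return NULL;
}

static int
attnum_for_name(const AttEntry *atts, int natts, const char *name)
{
    for (int i = 0; i < natts; i++)
        if (strcmp(atts[i].attname, name) == 0)
            return atts[i].attnum;
    return -1;                  /* stand-in for InvalidAttrNumber */
}

/* Translate a root attnum to the matching leaf attnum via the name. */
static int
leaf_attnum(const AttEntry *root, int nroot,
            const AttEntry *leaf, int nleaf, int root_attnum)
{
    const char *name = name_for_attnum(root, nroot, root_attnum);

    return name ? attnum_for_name(leaf, nleaf, name) : -1;
}
```

For example, if the root kept column "c" at attnum 3 after a dropped column, while a freshly created leaf numbers "c" as attnum 2, the translation returns 2 rather than the root's stale 3.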
-
Committed by Ashwin Agrawal

No need for a database-scope analyze; only the specific table needs to be analyzed for the test.
-
Committed by Ashwin Agrawal

GPDB currently supports only 1 replica. FTS and other components need to be adapted to support 1:n replication; until then, restrict the max_wal_senders GUC to 1. Later, when the code can handle it, the maximum value of the GUC can be raised. Also, remove the setting of max_wal_senders in the postmaster, which was added earlier to deal with filerep/walrep co-existence.
-
- 27 Jun 2018, 14 commits
-
-
Committed by Adam Lee

These three were listed in the common options:
```
FILL MISSING FIELDS
LOG ERRORS [SEGMENT REJECT LIMIT <replaceable class="parameter">count</replaceable> [ROWS | PERCENT] ]
IGNORE EXTERNAL PARTITIONS
```
But:
1. they do not work with both FROM and TO;
2. FILL MISSING FIELDS would be [FILL_MISSING_FIELDS true | false] in the generic form, which is silly; the old syntax is better;
3. SREH and IGNORE EXTERNAL PARTITIONS could not be specified as generic options.

Also documents the missing NEWLINE option.
-
Committed by Adam Lee

Unloading doesn't need it, and neither does checking the distribution policy.
-
Committed by Trevor Yacovone

Also, remove the dev-generated pipeline and add the prod-generated pipeline.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Committed by Lisa Oakley

Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Committed by Trevor Yacovone

This is a common issue with sles11: it does not currently include support for > TLS v1.1, and many upstream endpoints over the last 6 months have started to enforce > TLS v1.1. We resolved this by separating the sync_tools call into a separate task, so that we could run it from a centos docker image prior to compiling on the correct OS. These changes were backported to the other compile jobs. We are pushing this change to resolve the sles11 blocker, but we are still experiencing difficulty with windows.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
Co-authored-by: Ed Espino <edespino@pivotal.io>
-
Committed by Ashwin Agrawal

Some functions that had a different return type or arguments compared to upstream were modified with a comment in pg_proc.h, while a few were moved completely to pg_proc.sql. This difference causes confusion while merging, and having a single consistent method for all of them would be better. So, with this commit, upstream functions are defined in pg_proc.h irrespective of whether their definitions differ from upstream. Note: pg_proc.h is used for all upstream definitions, and pg_proc.sql is used to auto-generate the GPDB-added functions in Greenplum.
-
Committed by Ashwin Agrawal

In markDirty(), it seems to have been an oversight in commit 8c8b5c39 to avoid calling MarkBufferDirtyHint() for temp tables. The previous patch checked relation->rd_istemp before calling XLogSaveBufferForHint() in MarkBufferDirtyHint(), which was unnecessary given that it already checks for BM_PERMANENT. So, now call MarkBufferDirtyHint() unconditionally.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal

gp_dispatch = false on the utility command is correct, as we cannot dispatch the SET command yet, the transaction not having been established on the QEs yet. transaction_deferrable is only useful with the serializable isolation level as per the upstream docs, so added a note to start dispatching it when we support the serializable isolation level.
-
Committed by Ashwin Agrawal

Currently the GPDB WAL replication code is a mix of multiple versions; once we reach 9.3 we will get the opportunity to get back in sync with the upstream version. This will be taken care of then; until that time, live with the GPDB-modified version of CheckPromoteSignal().
-
Committed by Ashwin Agrawal

There is no reason to call `SyncRepWaitForLSN()` from the walsender process itself. Some code which did so seems to have existed in the past, but even if the walsender for whatever reason needs to perform a transaction, it shouldn't result in writing anything. Replaced the if with an assertion instead, to catch any violations of this assumption.
-
Committed by Ashwin Agrawal

Remove the Greenplum-specific GUC `Debug_xlog_insert_print` and instead use the upstream GUC `wal_debug` for the same purpose. Also, remove some unnecessary modifications vs upstream.
-
Committed by Ashwin Agrawal

Upstream doesn't have it and it is not used anymore in Greenplum, so lose it.
-
Committed by Ashwin Agrawal

Now that WAL replication is enabled for the QD and QEs, the code must be enabled.
-