- 03 September 2017, 6 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
GPDB's pg_dump only supports dumping from server versions based on PostgreSQL 8.2 and above, while PostgreSQL's pg_dump supports much older versions. In GPDB's version, many of the version checks that deal with older versions had been removed, but that caused a lot of diffs against upstream because of changed indentation. To reduce our diff footprint, put back those version checks. Actually trying to deal with very old server versions seems futile, though, so instead of resurrecting all the code, just put an error message into the branches where upstream e.g. constructs queries that work with older versions. The version checks should never actually fail, because we check the server version once at the beginning of pg_dump. But it's nice to have something in those branches, to document the fact that there was more code there in the upstream, and to keep the formatting the same as in upstream.
-
Committed by Heikki Linnakangas
Includes changing the --help message for --insert option slightly, to match upstream. In the upstream, this change was made already in version 8.2, but was missed in GPDB for some reason.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
A window frame where the "lead" and "trail" are the same, i.e. something like "XXX() OVER (w BETWEEN 1 ROWS PRECEDING AND 1 ROWS PRECEDING)", is pretty useless in practice, so it seems uninteresting to optimize for it. Better to keep the code simple.
-
Committed by Heikki Linnakangas
When extensions were introduced in PostgreSQL, a call to recordDependencyOnCurrentExtension() was added in the creation codepath of all the object types. However, external protocols are a GPDB-specific object type, and we missed making that change there. Fixes github issue #2942.
-
- 02 September 2017, 22 commits
-
-
Committed by Heikki Linnakangas
Backport to 5X_STABLE, in order to make backporting future fixes easier.
-
Committed by Heikki Linnakangas
I meant to remove this in previous commit already, but forgot to "git add".
-
Committed by Heikki Linnakangas
Commits 563c8c6b and 96e6f19d removed most of this dead infrastructure, but had to leave these in place to avoid a catalog change. Now that catalog changes are OK again, complete the removal.
-
Committed by Heikki Linnakangas
This includes the changes from upstream commit 096a30b5, to silence compiler warnings. We had already silenced them by initializing the variables elsewhere, but let's stick to upstream code wherever possible.
-
Committed by Heikki Linnakangas
Only ParseFuncOrColumn needed the additional "isstrict" and "isordered" return values from func_get_detail. And even ParseFuncOrColumn only needed that information under certain circumstances, for error checks. Move the code to fetch that information out of func_get_detail, to separate little helper functions. In principle, fetching the "isstrict" and "isordered" flags separately is more expensive, because it now requires two syscache lookups rather than one. In practice, however, this is a win, because you only need to fetch the "isstrict" flag if a FILTER clause was given, and you only need the "isordered" flag if an OVER clause was given. Both of those clauses are rare, so in the common case this actually saves one syscache lookup. (None of this matters much in practice, though, because syscache lookups are pretty cheap anyway.)
-
Committed by Heikki Linnakangas
The upstream code isn't quite identical to ours, but it's the same functionality. Now it's at least in the same place.
-
Committed by Heikki Linnakangas
This makes constructing strings a lot simpler, and less scary. I changed many places in GPDB code to use the new psprintf() function, where it seemed to make the most sense. A lot of code remains that could use it, but there's no urgency. I avoided changing upstream code to use it yet, even where it would make sense, to avoid introducing unnecessary merge conflicts. The biggest changes are in cdbbackup.c, where the code to count the buffer sizes was the most complex. I also refactored the #ifdef USE_DDBOOST blocks so that there is less repetition between the USE_DDBOOST and !USE_DDBOOST blocks; that should make it easier to catch, at compilation time, bugs that affect the !USE_DDBOOST case when compiling with USE_DDBOOST, and vice versa. I also switched to using pstrdup() instead of strdup() in a few places, to avoid memory leaks. (Although the way cdbbackup works, it would only get launched once per connection, so it didn't really matter in practice.)
-
Committed by Heikki Linnakangas
Seems better for the function to palloc the struct it returns, rather than have it fill in a struct provided by the caller. The thing that really caught my eye was this:

```
genericPair *genpair = (genericPair *) palloc0(4 * sizeof(char *));
```

That works, because the genericPair struct consists of four "char *" fields. But it seems fragile; it should use "sizeof(genericPair)". This commit fixes that, among other kibitzing.
-
Committed by Jacob Champion
BufferGetLSNAtomic accepts local (negative) buffer values, and simply exits early if it sees one. But it does index into the BufferDescriptors array using this invalid negative value, which makes Coverity complain. We're taking the address of the invalid location and not using it in the local case, so this likely doesn't have any real ramifications. But I can't find a clear statement that &array[invalid_index] is actually well-defined across all platforms/compilers, and someone in the future might miss that bufHdr isn't always pointing to something sane, so it seems prudent to just fix this.
-
Committed by Ashwin Agrawal
This is mostly a backport of a Postgres commit:

```
commit a507b869
Author: Robert Haas <rhaas@postgresql.org>
Date:   Wed Feb 8 15:45:30 2017 -0500

    Add WAL consistency checking facility.

    When the new GUC wal_consistency_checking is set to a non-empty value,
    it triggers recording of additional full-page images, which are
    compared on the standby against the results of applying the WAL record
    (without regard to those full-page images).  Allowable differences
    such as hints are masked out, and the resulting pages are compared;
    any difference results in a FATAL error on the standby.

    Kuntal Ghosh, based on earlier patches by Michael Paquier and Heikki
    Linnakangas.  Extensively reviewed and revised by Michael Paquier and
    by me, with additional reviews and comments from Amit Kapila, Álvaro
    Herrera, Simon Riggs, and Peter Eisentraut.
```

It was modified to work with the current xlog format of Greenplum, which differs from the Postgres code at the time this patch was committed. The main changes fit the current backup block format in xlog records. Also, some masking routines differ from upstream, as some of the masked flags will only appear once Greenplum catches up to later versions of Postgres.
-
Committed by Jimmy Yih
Now that catalog updates are allowed, we should disable binary swap validation. Once the next catalog freeze comes, someone can flip this back.
-
Committed by Ashwin Agrawal
The `gpsegwalrep.py` script creates mirrors; to create synchronous mirrors, it must now set the GUC `synchronous_standby_names`. The GUC is removed during mirror removal.
-
Committed by Ashwin Agrawal
When walrep was ported to GPDB, it was modified to block for synchronous replication only if a WAL sender exists. In the absence of a WAL sender, commits didn't block. This patch brings back the upstream way of blocking commits based on the GUC `synchronous_standby_names` for segments. Behavior for the master is unchanged, as the current focus is to get things working for segments without impacting master replication.
-
Committed by Bhuvnesh Chaudhary
For queries with an IN clause on top of a correlated subquery containing a LIMIT/OFFSET clause, the planner tries to determine in convert_IN_to_join() whether the IN clause can be converted to a join, and creates an RTE for the join if possible, without considering the LIMIT/OFFSET clause in that decision. Later, however, in pull_up_subqueries(), the check enforced by is_simple_subquery() prevents a subquery containing LIMIT/OFFSET clauses from being pulled up. This inconsistency causes a plan to be generated with a param but no corresponding subplan. The patch fixes the issue by adding the relevant checks in convert_IN_to_join() to identify whether the subquery is correlated and contains a LIMIT/OFFSET clause; in such cases the sublink is not converted to a join, and a plan with a subplan is created.
-
Committed by Jemish Patel
Signed-off-by: Omer Arap <oarap@pivotal.io>
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Dhanashree Kashid
While gathering metadata information about a partitioned relation, the relcache translator in GPDB constructs and sends partition constraints in a `CMDPartConstraintGPDB` object, which contains the following:

1. m_pdrgpulDefaultParts - list of partition levels that have default partitions
2. m_fUnbounded - indicates whether the constraint is unbounded
3. m_pdxln - part constraint expression

With more partitioning levels, the part constraint expression can grow large, and we end up spending a significant amount of time translating this expression to DXL format. For instance, for the following query, the relcache translator spends `18895.444000 ms` fetching the metadata information on a debug build:

```
DROP TABLE IF EXISTS date_parts;
CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
    SUBPARTITION BY LIST (month)
        SUBPARTITION TEMPLATE (
            SUBPARTITION Q1 VALUES (1, 2, 3),
            SUBPARTITION Q2 VALUES (4 ,5 ,6),
            SUBPARTITION Q3 VALUES (7, 8, 9),
            SUBPARTITION Q4 VALUES (10, 11, 12),
            DEFAULT SUBPARTITION other_months )
    SUBPARTITION BY RANGE(day)
        SUBPARTITION TEMPLATE (
            START (1) END (31) EVERY (10),
            DEFAULT SUBPARTITION other_days)
( START (2002) END (2012) EVERY (1),
  DEFAULT PARTITION outlying_years );

INSERT INTO date_parts
SELECT i, EXTRACT(year FROM dt), EXTRACT(month FROM dt), EXTRACT(day FROM dt), NULL
FROM (SELECT i, '2002-01-01'::DATE + i * INTERVAL '1 day' day AS dt
      FROM GENERATE_SERIES(1, 3650) AS i) AS t;

EXPLAIN SELECT * FROM date_parts WHERE month BETWEEN 1 AND 4;
```

At the ORCA end, however, we do not use this part constraint expression unless the relation has indices built on it. This is evident from CTranslatorDXLToExpr.cpp + L565 (`CTranslatorDXLToExpr::PexprLogicalGet`). Hence we do not need to send the part constraint expression from the GPDB relcache translator, since: A. it will not be consumed if there are no indices, and B. the DXL translation is expensive.
This commit fixes the relcache translator to send the partition constraint expression to ORCA only in the cases listed below:

```
IsPartTable  Index  DefaultParts  ShouldSendPartConstraint
NO           -      -             -
YES          YES    YES           YES
YES          NO     NO            NO
YES          NO     YES           YES (but only default levels info)
```

After this fix, the metadata fetch and translation time is reduced to `87.828000 ms`. We also need changes in the ORCA parse handler, since we will be sending the part constraint expression only in some cases.

[Ref #149769559]

Signed-off-by: Omer Arap <oarap@pivotal.io>
-
Committed by Shivram Mani
-
Committed by Shivram Mani
* README updates for PXF setup instructions
* PXF README updates for PXF setup instructions
* test update
* README updates for PXF setup instructions
* PXF README updates for PXF setup instructions
* note on curl dependency
* minor update
-
Committed by Lav Jain
-
Committed by David Yozie
* updating security guide landing page and main sections to use consistent short descriptions
* making security title consistent with others
* updating best practices guide landing page and main sections to use consistent short descriptions
* updating utility guide main sections to use consistent short descriptions
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - resource mgmt restructure, resgroup initial content
* incorporate edits from david; some reorg
* add memory allotment diagram
* minor edit
* make resource group experimental note a warning
* incorporate edits from simon
* update base color of resource group graphic
* add missing ;
-
- 01 September 2017, 12 commits
-
-
Committed by Mel Kiyama
-
Committed by Daniel Gustafsson
This reverts commit 1596d323 since it broke cluster startup; clearly it needs more attention before being ready.
-
Committed by Daniel Gustafsson
This puts us on par with upstream PostgreSQL 9.1.20, released on 2016-02-11, for tzdata and the below referenced commit. The reason for not updating further is that there are tzdata format changes in 9.1.21; 2016a is the last release we can import without merging code changes as well.

```
commit 6887d72d06a2f36508d5be9cca316088d4c60b26
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Fri Feb 5 10:59:09 2016 -0500

    Update time zone data files to tzdata release 2016a.

    DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory
    (Zabaykalsky Krai).  Historical corrections for Pakistan.
```
-
Committed by Daniel Gustafsson
This bumps the copyright years to the appropriate years after not having been updated for some time. Also reformats existing code headers to match the upstream style to ensure consistency.
-
Committed by Daniel Gustafsson
The line numbers are prone to change (and ideally shouldn't be there at all, since we should use ereport instead), so scrub them from the test output with a matchsubs rule. The recent changes for copyright notices broke this, since they added a line to the file.
-
Committed by Daniel Gustafsson
The missing errcode makes the ereport call include the line number of the invocation from the .c file, which not only isn't very useful but also causes the tests to fail when code is added to or removed from the file.
-
Committed by Peifeng Qiu
The original gpload test suite is intended to be run locally, so the GPDB cluster and the test must be on the same machine. For platforms that don't have the GPDB server available but do have the loader package, we need to run the test remotely.
-
Committed by Heikki Linnakangas
* Use ereport() with a proper error code, rather than elog(), so that you don't get the source file name and line number in the message, and the serious-looking backtrace in the log.
* Remove the hint that advised "SET gp_enable_segment_copy_checking=off" when a row failed the check that it's being loaded to the correct segment. Ignoring the mismatch seems like a very bad idea, because if your rows are in incorrect segments, all bets are off, and you'll likely get incorrect query results when you try to query the table.
-
Committed by Pengzhou Tang
-
Committed by Pengzhou Tang
When defining an index concurrently, DefineIndex() commits the current transaction and dispatches a command to segments without resource group info. The QEs therefore get capability info with zeroes configured; the problem is that groupReleaseMemQuota() wrongly uses this empty capability info, causing a floating-point exception. According to the code, it should use prevSharedInfo->caps instead.
-
Committed by Lav Jain
* Make PXF part of the gpdb binary
* Fix popd error
-
Committed by Mel Kiyama
* DOCS: SELECT - Add RECURSIVE keyword to WITH clause
* docs: recursive CTE - add experimental designation. Add GUC gp_recursive_cte_prototype
* docs: updated recursive CTE docs based on review comments.
-