- 13 Sep 2017 (7 commits)
-
-
Committed by Shoaib Lari
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Ashwin Agrawal
We need the flexibility to control the frequency at which the pipeline gets triggered; one such use case is that the same pipeline code runs with asserts on every commit and without asserts daily/weekly. So, add a time resource through which the triggering can be controlled, and parameterize the trigger to control things.
-
Committed by Karen Huddleston
Some backups with Data Domain contained an incorrect path in their report file. When checking whether the backup timestamp was in a pre or post content-id format, restore would fail because the path in the report file didn't match the expected pattern. Instead, we now check for the timestamp format in a more generalized way to account for this discrepancy. Signed-off-by: Chris Hajas <chajas@pivotal.io>
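The generalized check described above could look something like this Python sketch (the function and regex are hypothetical illustrations, not the actual gpdbrestore code): match the 14-digit backup timestamp anywhere in the report-file path, rather than requiring the path to fit a rigid pre/post content-id pattern.

```python
import re

# Hypothetical: a backup timestamp is a 14-digit YYYYMMDDHHMMSS string.
# Searching for it anywhere in the path tolerates both pre and post
# content-id directory layouts.
TIMESTAMP_RE = re.compile(r"\d{14}")

def extract_timestamp(report_path):
    """Return the backup timestamp embedded in a report-file path, or None."""
    match = TIMESTAMP_RE.search(report_path)
    return match.group(0) if match else None
```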
-
Committed by Mel Kiyama
-
Committed by Jimmy Yih
It was noticed on CentOS 6 that the entries in the char* list would become blank after freeing the raw string obtained from setting the GUC wal_consistency_checking. We should free this string only after we finish using it. Authors: Jimmy Yih and Taylor Vesely
-
Committed by Shoaib Lari
The MoveTransFilespaceLocally class in filespace.py used to verify that two directories are equivalent by computing an MD5 over their contents. This commit changes the behavior to use SHA-256 instead. Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
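A directory-equivalence check of this kind could be sketched as follows (names are illustrative, not the actual filespace.py code): hash every file's relative path and contents into one SHA-256 digest per directory, then compare digests.

```python
import hashlib
import os

def dir_digest(root):
    """Fold every file's relative path and contents under root into one SHA-256 digest."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def dirs_equivalent(a, b):
    """Two directories are equivalent if their digests match."""
    return dir_digest(a) == dir_digest(b)
```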
-
Committed by Nadeem Ghani
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
- 12 Sep 2017 (21 commits)
-
-
Committed by Heikki Linnakangas
We no longer call contain_windowfunc() on raw parse trees, so we can revert this to the way it is in the upstream.
-
Committed by Heikki Linnakangas
This is the first merge commit on our way to merging with PostgreSQL 8.4. Nothing too exciting in this batch; most of the included changes we had in fact already backported earlier. But some things are worth mentioning:
* This bumps the PostgreSQL version number to 8.4devel. This might require 3rd-party tools to be fixed to cope with that.
* Bumping the version number caused some \d queries in psql to fail, because they did a version check and tried to run queries that only work on PostgreSQL 8.4. This happens because the GPDB version of psql has been backported from 9.0. To make it work for now, I #ifdef'd out the version checks, and added GPDB_84_MERGE_FIXME comments to remind us to re-enable them later, once we have merged the backend catalog changes that they need.
* Same for some version checks in pg_upgrade.
* The build system now links the whole backend in one invocation, instead of using the per-directory intermediate SUBSYS.o files. We had already backported the Makefile changes for that, so this just flips the switch to start using it.
This has been a joint effort between Heikki Linnakangas, Daniel Gustafsson, Jacob Champion and Tom Meyer.
-
Committed by Pengzhou Tang
The view gp_toolkit.gp_resgroup_status collects resgroup status information on both the QD and the QEs, before and after a 300ms delay, to measure CPU usage. When resgroups are dropped concurrently, the view might fail due to the missing resgroups. This is fixed by holding ExclusiveLock, which blocks the drop operations. Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Pengzhou Tang
On some weak test agents, the shared semaphore set is too small to support a large max_connections like 150, so refine the test cases to work with 40 max_connections.
-
Committed by Pengzhou Tang
Previously in ResGroupSlotRelease(), we set the resWaiting flag to false outside of ResGroupLock. A race condition can occur: the waitProc that was meant to be woken up is canceled just before its resWaiting is set to false; in that window the waitProc runs another query and waits for a free slot again; then resWaiting is set, and the waitProc gets another, unexpected free slot. For example, with rg1 having concurrency = 1:
s1: BEGIN;
s2: BEGIN; -- blocked
s1: END; -- set a breakpoint just before resWaiting is set in ResGroupSlotRelease()
s2: cancel it;
s3: BEGIN; -- gets a free slot
s2: BEGIN; -- runs BEGIN again and is blocked
s1: continue;
s2: gets an unexpected free slot
To avoid leaking a resgroup slot, we now set resWaiting under ResGroupLock, so the waitProc has no window in which to re-fetch a slot before resWaiting is set. It's hard to add a regression test: adding a breakpoint in ResGroupSlotRelease() may block all other operations, because it holds ResGroupLock.
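The essence of the fix can be illustrated with a small Python sketch (the real GPDB code is C; the class and field names here are hypothetical): a waiter's flag is only ever cleared while the pool lock, standing in for ResGroupLock, is held, so a canceled waiter has no window in which to re-enter the wait queue.

```python
import threading

class SlotPool:
    """Toy model of a resgroup slot pool: resWaiting is cleared under the lock."""

    def __init__(self, concurrency):
        self.lock = threading.Lock()
        self.free_slots = concurrency
        self.wait_queue = []  # procs whose res_waiting flag is True

    def acquire(self, proc):
        """Take a slot, or join the wait queue with res_waiting set."""
        with self.lock:
            if self.free_slots > 0:
                self.free_slots -= 1
                return True
            proc["res_waiting"] = True
            self.wait_queue.append(proc)
            return False

    def release(self):
        """Hand the slot directly to a waiter, clearing its flag under the lock."""
        with self.lock:
            if self.wait_queue:
                proc = self.wait_queue.pop(0)
                proc["res_waiting"] = False  # cleared while the lock is held
            else:
                self.free_slots += 1
```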
-
Committed by Xiaoran Wang
A string overflow can't happen here because gpcheckcloud_newline was checked beforehand, but strncpy() still looks better. Signed-off-by: Xiaoran Wang <xiwang@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Shreedhar Hardikar
During NOT EXISTS sublink pullup, we created a one-time false filter when the sublink contained aggregates, without checking for limitcount. However, when the sublink contains an aggregate with LIMIT 0, we should not generate such a filter, as it produces incorrect results. Added a regress test. Also, initialize all the members of IncrementVarSublevelsUp_context properly. Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Heikki Linnakangas
tqual.c is quite a long file, and there are plenty of legitimate MPP-specific changes in it compared to upstream. Move these functions to a separate file, to reduce our diff footprint. This hopefully makes merging easier in the future. Run pgindent over the new file, and do some manual tidying up. Also, use CStringGetTextDatum() instead of the more complicated dance with DirectFunctionCall and textin() to convert C strings into text Datums.
-
Committed by Heikki Linnakangas
In the upstream, two different structs are used to represent a window definition: WindowDef in the grammar, which is transformed into WindowClause during parse analysis. In GPDB, we've been using the same struct, WindowSpec, in both stages. Split it up, to match the upstream. The representation of the window frame, i.e. "ROWS/RANGE BETWEEN ...", was different between the upstream implementation and the GPDB one. We now use the upstream frameOptions+startOffset+endOffset representation in the raw WindowDef parse node, but it's still converted to the WindowFrame representation for the later stages, so WindowClause still uses that. I will switch over the rest of the codebase to the upstream representation as a separate patch. Also, refactor WINDOW clause deparsing to be closer to upstream. One notable difference is that the old WindowSpec.winspec field corresponds to the winref field in WindowDef and WindowClause, except that the new 'winref' is 1-based, while the old field was 0-based. Another noteworthy thing is that this forbids specifying "OVER (w ROWS/RANGE BETWEEN ...)" if the window "w" already specified a window frame, i.e. a different ROWS/RANGE BETWEEN. There was one such case in the regression suite, in window_views, and this updates the expected output of that test to be an error.
-
Committed by David Yozie
-
Committed by Jemish Patel
-
Committed by Tom Meyer
Previously, gpdbrestore would analyze all schemas in the database during a schema-only restore. Now, only those schemas that were restored will be analyzed. Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
With commit 387c485d, the winstar and winagg fields were added to the WindowRef node, so this commit adds handling for them in the ORCA translator.
-
Committed by Heikki Linnakangas
I don't know what this was used for, but I don't see any code that would send these messages anymore.
-
Committed by Lav Jain
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
nMotionNodes tracks the number of Motion nodes in a plan, and each plan node maintains its own nMotionNodes. Counting the number of Motions in a plan node by traversing the tree and adding up the nMotionNodes found in nested plans gives an incorrect count of Motion nodes. So instead of using nMotionNodes, use a boolean flag to track whether the subtree, excluding the initplans, contains a Motion node.
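The boolean-flag approach can be sketched with a toy plan tree in Python (GPDB's planner is C; the node shape and names here are made up for illustration): the check recurses into subtree children but deliberately skips initplans.

```python
class PlanNode:
    """Toy plan node: is_motion marks a Motion node; initplans are kept
    separate from the main subtree children."""
    def __init__(self, is_motion=False, children=(), initplans=()):
        self.is_motion = is_motion
        self.children = list(children)
        self.initplans = list(initplans)

def subtree_contains_motion(node):
    """True if the subtree, excluding initplans, contains a Motion node."""
    if node.is_motion:
        return True
    return any(subtree_contains_motion(c) for c in node.children)
```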
-
-
Committed by Marbin Tan
There are times when the flags/tags are not passed in the pipeline configuration but the job shows as green anyway (a false positive). This is because the return code is 0 even though we don't do anything. Force an error when flags/tags are not specified when running behave with the makefile.
-
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io> Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
- 11 Sep 2017 (4 commits)
-
-
Committed by Xiaoran Wang
gpcheckcloud will append a newline to files that are missing one, so that the last line won't be concatenated with the first line of the next file. Add an option, gpcheckcloud_newline, to support configuring the newline; any newline other than LF, CRLF or CR is reported as an error. Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
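The behavior could be sketched as follows (the real gpcheckcloud is not Python; the function name and option handling here are illustrative): append the configured newline only when the chunk is missing one, and reject anything other than LF, CRLF or CR.

```python
# Illustrative mapping of the gpcheckcloud_newline option values.
VALID_NEWLINES = {"LF": b"\n", "CRLF": b"\r\n", "CR": b"\r"}

def ensure_trailing_newline(data, newline="LF"):
    """Append the configured newline to data if it is missing one, so the
    last line isn't concatenated with the next file's first line."""
    if newline not in VALID_NEWLINES:
        raise ValueError("unsupported newline: %r" % newline)
    nl = VALID_NEWLINES[newline]
    return data if data.endswith(nl) else data + nl
```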
-
Committed by Heikki Linnakangas
In commit d16710ca, I added an optimization to NOT IN subqueries, to remove any DISTINCT and ORDER BY. But on second thought, we should use the existing functions to do that. Also, add_notin_subquery_rte() was not a good place to do it; it's a surprising side effect for that function. Move the code to convert_IN_to_antijoin(), which is in line with the similar calls in convert_EXPR_to_join() and convert_IN_to_join().
-
Committed by Heikki Linnakangas
pg_config.h.in is generated by autoheader, but it had fallen out of date. Has everyone just been adding stuff manually to it, or how did this happen? In any case, let's run it now.
-
Committed by Heikki Linnakangas
It occurred to me while looking at PR #1460 that a DISTINCT or an ORDER BY in a NOT IN (...) subselect won't make any difference to the overall result, so we can strip it off and save the effort. In its current form, PR #1460 would pessimize that case slightly more, by forcing the subselect's result to be gathered to a single node for deduplication or final ordering, while before we would do only a local ordering/deduplication on each segment. But it is a waste of effort to do that even within each segment, and this PR gets rid of that.
-
- 09 Sep 2017 (7 commits)
-
-
Committed by Heikki Linnakangas
Remove the assertion that WindowRef.winagg is set correctly in the executor, because it won't be if the plan was generated by ORCA.
-
Committed by Marbin Tan
-
Committed by Haisheng Yuan
Add the required steps before building GPDB on CentOS. Fixes issues #2715, #2995 and #3163. [ci skip]
-
Committed by Xin Zhang
Originally, if the REORGANIZE option is used and the source and target tables have the same partition distribution policy, the redistribution is skipped. This is because the planner will skip the reshuffle if the source and target distribution policies match. However, if both the source and target distribution policies are random, the planner will generate a redistribution motion to balance the tuples across the cluster. Leveraging that, we added a new code path that temporarily sets the source table's distribution policy to random while executing the CTAS query, so that the optimizer can generate the proper query plan with a `redistribute motion`. We restore the policy after creating the temporary table. Test cases are added to showcase the scenario where a table is loaded with data inconsistent with its distribution policy when using COPY ... ON SEGMENT. The second case catches a regression when using SET WITH (REORGANIZE = TRUE) DISTRIBUTED RANDOMLY. Signed-off-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by David Yozie
* reorganizing / promoting external tables topic
* correcting graphics locations
* more reorg and consolidation of topics
* promoting shortdescs, consolidating web table topics
* removing duplicate, manual chapter toc
* promoting more shortdescs, removing more manual tocs
* changing title of hdfs section
* promoting gphdfs section, removing overview section that mentions pxf
* adding shortdesc to gphdfs topic
-
Committed by Heikki Linnakangas
This adds the 'winstar' field from the upstream. Also bring in the 'winagg' field while we're at it, although it's only used for an assertion in nodeWindow.c so far.
-
Committed by Heikki Linnakangas
-
- 08 Sep 2017 (1 commit)
-
-
Committed by Ashwin Agrawal
FaultInjector_UpdateHashEntry() was using FaultInjector_InsertHashEntry(), which ends up adding an entry if one is not present, without incrementing `faultInjectorShmem->faultInjectorSlots`. This causes an inconsistency, and also sometimes triggers "FailedAssertion(!(faultInjectorShmem->faultInjectorSlots == 0))" during fault injection reset, as the counter goes negative. Fix this by using FaultInjector_LookupHashEntry() instead, since a lookup is all that FaultInjector_UpdateHashEntry() needs. Scenario that hit the assertion:
gpfaultinjector -f all -m async -y resume -r primary -H ALL
gpfaultinjector -f all -m async -y reset -r primary -H ALL
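The lookup-vs-insert distinction can be modeled with a small Python sketch (the real fault injector is C shared-memory code; all names here are hypothetical): the slot counter moves only on real insert/remove, and update never creates an entry.

```python
class FaultInjectorTable:
    """Toy model: update looks up an existing entry instead of inserting,
    so the slot counter stays consistent with the number of entries."""

    def __init__(self):
        self.entries = {}
        self.slots = 0  # stands in for faultInjectorShmem->faultInjectorSlots

    def insert(self, name, entry):
        self.entries[name] = entry
        self.slots += 1

    def remove(self, name):
        if self.entries.pop(name, None) is not None:
            self.slots -= 1

    def update(self, name, **fields):
        entry = self.entries.get(name)  # lookup only, never insert
        if entry is None:
            return False
        entry.update(fields)
        return True
```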
-