- 24 Jul 2019, 6 commits
-
-
Committed by Daniel Gustafsson
The whole target is quite dubious and should be removed in favor of just using 'make -C gpMgmt install', but until it is clear how this file is used, let's at least clean out the worst offenders. Reviewed-by: Bradford D. Boyle <bboyle@pivotal.io>
-
Committed by Daniel Gustafsson
The greenplum_path target was referencing a program which no longer exists and is thus dead code, and the devel_failtinj target is no longer defined at all. Reviewed-by: Bradford D. Boyle <bboyle@pivotal.io>
-
Committed by Daniel Gustafsson
The gpAux Makefile was checking the GCC version to ensure that GPDB is compiled with the minimum version required. This is a layering violation, since such checks belong in autoconf. If the compiler is let through autoconf then we are good to go; further checking should not (need to) be performed. The version checked for is also quite ancient by now; the odds of it being the compiler on a system which can compile the rest of the codebase are slim to none. Reviewed-by: Bradford D. Boyle <bboyle@pivotal.io>
-
Committed by Heikki Linnakangas
It was made unused by commit 8eed4217. Co-authored-by: Pengzhou Tang <ptang@pivotal.io>
-
Committed by Shoaib Lari
For commands called directly by the user, we provide the fix. Since Behave and unit tests are supposed to behave as a normal user, we do not provide the fix there: the fix is supposed to be done by the commands themselves, and we want to test with an unmodified search_path in the actual tests. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Ashuka Xue
This commit corresponds to the ORCA commit "Implement Full Merge Join in ORCA" and bumps the ORCA version to v3.59.0. It includes the following changes to support merge join in ORCA: 1. Update the optimizer_expand_fulljoin GUC to use traceflags instead of disabling the transform. 2. Translator changes for Merge Join. 3. Add IsOpMergeJoinable() and GetMergeJoinOpFamilies() wrappers. 4. Introduce the GUC optimizer_enable_mergejoin.
-
- 23 Jul 2019, 8 commits
-
-
Committed by Lisa Owen
-
Committed by Adam Berlin
This allows a user to specify regress options so that the isolation2 tests for pg_basebackup_with_tablespaces pass consistently. Currently, the tests fail if the user's source directory creates a tablespace location directory path that is longer than the 100-character limit for pg_basebackup to add a tablespace location directory to the backup.
-
Committed by Adam Berlin
-
Committed by Zhenghua Lyu
This commit refactors the function `cdbpath_motion_for_join` to make it clearer and to generate better plans for some cases. In a distributed computing system, gathering distributed data into a singleQE should always be the last choice. Previously, the code for general and segmentgeneral locus, when not ok_to_replicate, would try to gather the other locus to singleQE. This commit improves that by first trying to add a redistribute motion.

The logic for the join result's locus when the outer's locus is general:
1. if outer is ok to replicate, the result's locus is the same as the inner's locus
2. if outer is not ok to replicate (like left join or WTS cases):
   2.1 if inner's locus is Hashed or HashedOJ, try to redistribute outer as the inner; if that fails, make inner singleQE
   2.2 if inner's locus is Strewn, try to redistribute outer and inner; if that fails, make inner singleQE
   2.3 otherwise just return the inner's locus; no motion is needed

The logic when the outer's locus is segmentgeneral:
- if both are SegmentGeneral:
  1. if both loci are equal, no motion is needed; simply return
  2. for update cases: if the result relation is SegmentGeneral, the update must execute on each segment of the result relation; if the result relation's numsegments is larger, the only solution is to broadcast the other
  3. otherwise no motion is needed; change both numsegments to the common value
- if only one of them is SegmentGeneral:
  1. considering the update case, if the result relation is SegmentGeneral, the only solution is to broadcast the other
  2. if the other's locus is singleQE or entry, change SegmentGeneral to the other's locus
  3. the remaining possibility is that the other's locus is partitioned:
     3.1 if SegmentGeneral is not ok_to_replicate, try to add a redistribute motion; if that fails, gather each to singleQE
     3.2 if SegmentGeneral's numsegments is larger, just return the other's locus
     3.3 try to add a redistribute motion; if that fails, gather each to singleQE
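The rules for the outer-is-general case above can be sketched as a small decision function. This is a hedged illustration only: the names, locus strings, and the `can_redistribute` flag are hypothetical stand-ins, not the actual `cdbpath_motion_for_join` implementation.

```python
# Hypothetical sketch of the outer-locus-is-General rules described above.
# Locus names and the can_redistribute flag are illustrative, not the real API.
def join_locus_outer_general(ok_to_replicate, inner_locus, can_redistribute):
    if ok_to_replicate:
        return inner_locus                      # 1. result follows inner's locus
    if inner_locus in ("Hashed", "HashedOJ"):   # 2.1 redistribute outer as inner
        return inner_locus if can_redistribute else "SingleQE"
    if inner_locus == "Strewn":                 # 2.2 redistribute both sides
        return "Hashed" if can_redistribute else "SingleQE"
    return inner_locus                          # 2.3 no motion needed
```

Note how gathering to singleQE only appears as the fallback when redistribution fails, which is the point of the refactor.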
-
Committed by Zhenghua Lyu
The locus type Replicated can only be generated by a join operation, and in the function cdbpathlocus_join there is a rule: `<any locus type> join <Replicated> => any locus type`. By proof by contradiction, when the code arrives here it is impossible that either of the two input paths' locus is Replicated, so we add two asserts.
-
Committed by Adam Lee
It was disabled by accident several months ago while implementing `COPY (query) TO ON SEGMENT`; re-enable it.

```
commit bad6cebc
Author: Jinbao Chen <jinchen@pivotal.io>
Date:   Tue Nov 13 12:37:13 2018 +0800

    Support 'copy (select statement) to file on segment' (#6077)
```

WARNING: there are no safety protections in utility mode; it is not recommended except in disaster recovery situations. Co-authored-by: Weinan WANG <wewang@pivotal.io>
-
Committed by David Krieger
The recent sysctl changes (42930ed1) modified the CCP nodes. Somehow, this causes memory issues on our CCP nodes for Behave. There was a recent, similar modification for gpexpand (6f494638).
-
Committed by Weinan WANG
Resource group assumes memory access is always faster than disk, and it adds its own spill mechanism for the hashagg executor node into its memory management. If the hash table size exceeds `max_mem`, under the resource group model the hash table does not spill and fan out data; resource group instead wants to grant more memory to the hash table. However, this strategy impacts the hash collision rate, causing performance regressions in some OLAP queries. We now ignore the resource group GUC when hashagg evaluates whether it needs to spill. Co-authored-by: Adam Li <ali@pivotal.io>
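The behavioral change can be sketched as follows. This is an illustrative model only; the names (`should_spill`, `max_mem_kb`, the `fixed` flag) are hypothetical and do not reflect the actual hashagg C code.

```python
# Hypothetical sketch: before the fix, under resource group the hash table
# never spilled, even past max_mem; after the fix, the spill check ignores
# the resource group setting entirely.
def should_spill(hash_table_kb, max_mem_kb, under_resource_group, fixed=True):
    if under_resource_group and not fixed:
        return False                       # old behavior: keep growing in memory
    return hash_table_kb > max_mem_kb      # spill once over the memory budget
```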
-
- 22 Jul 2019, 8 commits
-
-
Committed by Adam Lee
MemoryAccounting_RequestQuotaIncrease() returns a number in bytes, but the caller here expects kB.
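The shape of the fix is just a unit conversion at the call site. A minimal sketch, with a hypothetical stand-in for the real C function:

```python
# Hypothetical sketch of the unit mismatch: the request function returns
# bytes, but the caller compares against a kB quota.
def request_quota_increase_bytes():
    return 2 * 1024 * 1024                      # e.g. 2 MB, in bytes

quota_kb = request_quota_increase_bytes() // 1024   # the missing conversion
```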
-
Committed by tubocurarine
When building the file `_pg.so` on the macOS platform, distutils invokes the clang compiler with the arguments `-arch x86_64 -arch i386`. But the type `int128` is not available for the i386 architecture, so the following error occurs:

```
In file included from pgmodule.c:32:
In file included from include/postgres.h:47:
include/c.h:427:9: error: __int128 is not supported on this target
typedef PG_INT128_TYPE int128
        ^
include/pg_config.h:838:24: note: expanded from macro 'PG_INT128_TYPE'
        ^
In file included from pgmodule.c:32:
In file included from include/postgres.h:47:
include/c.h:433:18: error: __int128 is not supported on this target
```

By adding `['-arch', 'x86_64']` to `extra_compile_args`, distutils removes `-arch i386` from the compiler arguments, which fixes the compile error.
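A sketch of what such a build-script change looks like, using setuptools' `Extension` for illustration; the source list and module name here are hypothetical, not the actual build script touched by this commit.

```python
# Hypothetical sketch: pinning -arch x86_64 in extra_compile_args keeps the
# build from also passing -arch i386 to clang on macOS.
from setuptools import Extension

pg_ext = Extension(
    "_pg",
    sources=["pgmodule.c"],                  # hypothetical source list
    extra_compile_args=["-arch", "x86_64"],  # drop the i386 slice
)
```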
-
Committed by Adam Lee
scan->raw_buf_done was used for custom external tables only; refactor to remove the MERGE_FIXME. cstate->raw_buf_len is safe to use since we operate on pstate->raw_buf directly in this case.
-
Committed by Adam Lee
About `isjoininner`: searching the history in the merge branch shows it was removed by upstream commit "e2fa76d8 - Use parameterized paths to generate inner indexscans more flexibly" from 9.2. That MERGE_FIXME was there because, at the time, functions which relied on `isjoininner` refused to compile.
-
Committed by Adam Lee
If there are more than INT_MAX rejected rows, this will overflow. That is possible at least if you specify the segment reject limit as a percentage. We still keep the SEGMENT REJECT LIMIT value as an int; expanding it would break lots of things like the catalog for too little benefit.
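To make the overflow concrete, here is a sketch that simulates a 32-bit C `int` counter wrapping past INT_MAX (the function name is illustrative; the real counter lives in the C copy code):

```python
# Hypothetical sketch: a 32-bit rejected-row counter wraps to a negative
# value past INT_MAX, while a wider (64-bit) counter just keeps counting.
INT_MAX = 2**31 - 1

def bump_int32(counter):
    """Simulate C int increment with two's-complement wraparound."""
    counter = (counter + 1) & 0xFFFFFFFF
    if counter > INT_MAX:
        counter -= 2**32
    return counter
```

`bump_int32(INT_MAX)` goes negative, which is why the rejected-row counter needed a wider type.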
-
Committed by Adam Lee
Now the processed and rejected counting happens only in NextCopyFrom(), which reads the next tuple from the file; this makes much more sense.
-
Committed by Adam Lee
For future usage, and to remove a MERGE_FIXME.
-
Committed by Ning Yu
For example: in the same session, query 1 has 3 slices and creates gang 1, gang 2 and gang 3; query 2 has 2 slices, and we want it to reuse gang 1 and gang 2 instead of, say, gang 3 and gang 2. This way the two queries can have the same send-receive port pairs. That is useful on platforms like Azure, because Azure limits the number of distinct send-receive port pairs (AKA flows) in a certain time period. Co-authored-by: Hubert Zhang <hzhang@pivotal.io> Co-authored-by: Paul Guo <pguo@pivotal.io> Co-authored-by: Ning Yu <nyu@pivotal.io>
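The reuse policy in the example amounts to keeping gangs in creation order and handing the first N back to an N-slice query. A toy sketch (the `Session` class and gang labels are hypothetical, not the actual gang-management code):

```python
# Hypothetical sketch of ordered gang reuse: a session keeps its gangs in
# creation order, and a query with n slices always reuses the first n.
class Session:
    def __init__(self):
        self.gangs = []                      # "gang 1", "gang 2", ... in order
    def get_gangs(self, n_slices):
        while len(self.gangs) < n_slices:    # create only what is missing
            self.gangs.append(f"gang {len(self.gangs) + 1}")
        return self.gangs[:n_slices]
```

A 2-slice query after a 3-slice query gets gang 1 and gang 2, so both queries use the same port pairs.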
-
- 20 Jul 2019, 2 commits
-
-
Committed by Chuck Litzell
* docs - bring the pg_auth kerberos docs up-to-date with postgres
* Edits from David
* Review comments - codeph on trust, ident, password auth method names
-
Committed by David Yozie
-
- 19 Jul 2019, 1 commit
-
-
Committed by Daniel Gustafsson
The correct name of the gpperfmon installation tool is gpperfmon_install, and the GUC for enabling it is gp_enable_gpperfmon.
-
- 18 Jul 2019, 12 commits
-
-
Committed by Daniel Gustafsson
Commit 3168a627 removed support for ignoring table header whitespace differences in test output, but the patch was a few bricks shy of a load. There were enough leftover bits that the option could be invoked, but without it actually working. This removes the leftovers. Looking at this, it became clear that we had a whitespace ignore which was dead code, as it couldn't be triggered from the outside. Rather than trying to revive more cruft in atmsort, this removes the code since we clearly aren't using it. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
- also, wait for primaries to recover after panics
- also, checkpoint at the end of each test to set the redo point, so as not to leak into the next test
-
Committed by Hubert Zhang
The one-phase commit message was changed to `Distributed Commit (one-phase)`. We need to fix the new case introduced by commit #6f9368 in the direct dispatch answer file.
-
Committed by Wang Hao
The goal of query_info_hook_test is to ensure query_info_collect_hook is placed in the proper locations for emitting query execution metrics. This test was flaky due to the uncertain order of calls between QD and QEs when the interconnect is in TCP mode. This fix simply silences all QEs from emitting messages. This is acceptable within the scope of this test, because we just want to make sure hooks are called at the correct timing for each backend; it should not be disturbed by query dispatching between QD and QEs.
-
Committed by Hubert Zhang
A prepared statement binds parameters for each execution. It needs to decide between a cached generic plan without params and a custom plan with params. In the past, GPDB used the plan cost plus the re-plan cost to choose between the generic and custom plan. But a generic plan does not contain params, which means it cannot generate a direct dispatch plan the way a custom plan can. A non-direct-dispatch plan involves unnecessary QEs, which still need to go through the volcano model, do two-phase commit, and write prepare xlog. So the cost of failing to generate a direct dispatch plan can in some cases be higher than the re-plan cost, making the custom plan run faster than the generic plan even though it must re-plan for every execute. Note that the non-direct-dispatch cost is not yet considered in the planner; the planner treats direct dispatch as an optimization and always enables it when possible. But for prepared statements, the generic plan cannot generate a direct dispatch plan at all, so we need to account for this cost here. As a result, we introduce a non-direct-dispatch cost into the total cost, only for cached plans. Co-authored-by: Ning Yu <nyu@pivotal.io>
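The cost comparison described above can be sketched with toy numbers. This is a hedged illustration only; the function and the penalty value are hypothetical, not the planner's actual cost model.

```python
# Hypothetical sketch: for a cached (generic) plan that cannot be
# direct-dispatched, a non-direct-dispatch penalty is added to its total
# cost, which can make the custom plan win even though the custom plan
# pays the re-plan cost on every execution.
def choose_plan(generic_cost, custom_cost, replan_cost, dispatch_penalty):
    generic_total = generic_cost + dispatch_penalty  # generic: all QEs involved
    custom_total = custom_cost + replan_cost         # custom: re-plans each time
    return "generic" if generic_total <= custom_total else "custom"
```

With equal base plan costs, a large enough dispatch penalty flips the choice to the custom plan; with no penalty, the generic plan wins as before.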
-
Committed by Chris Hajas
3.58.0 corresponds to commit "Only create PropConstraint hashmap if necessary". 3.58.1 corresponds to commit "Fix stack-use-after-scope for `CCacheHashtableAccessor` instantiation". Authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by dyozie
-
Committed by dyozie
-
Committed by Daniel Gustafsson
Happened to stumble over a commit by Asim that didn't seem to use the usual name, and sure enough. Also add a few others that I had lying around, having been waiting for more to make it worth committing.
-
Committed by Daniel Gustafsson
The list of files to clean in test/regress contained references to files no longer present. pg_class32 was an intermediate file in a test for upgrades from Greenplum 3.2 to 3.3/3.4; cppudf.sql was added in an Orca testsuite commit which seems to have never used that file at all; gmon.out is an output file generated by gperf, and all gperf invocations have been removed from tests. Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
- 17 Jul 2019, 3 commits
-
-
Committed by Pengzhou Tang
Commit bfd1f46c used the wrong time unit (ms expected, us passed) in the BackoffSweeper backend, which prevents it from re-calculating the CPU shares in time; the normal backends then sleep for more CPU ticks than before in CHECK_FOR_INTERRUPTS, causing a performance downgrade.
-
Committed by Asim R P
It takes a non-zero amount of time after a command is dispatched from a client until it appears in pg_stat_activity. The test must wait before validating anything based on pg_stat_activity. The wait logic was already added for one instance of such validation; this patch adds the wait logic for the remaining instance. Also found a way to avoid creating one table while at it. Reviewed-by: Shaoqi Bai and Adam Berlin
-
Committed by Hans Zeller
This tool looks at EXPLAIN plans and recognizes the line with the optimizer version. Recently, we added the string "(GPORCA)" to the optimizer name. The fix is to add parentheses to the characters we ignore in this line.
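A sketch of what widening the ignored characters looks like as a regex change. The patterns below are hypothetical illustrations, not the tool's actual expressions: the point is only that the old pattern fails once "(GPORCA)" appears in the line, while one that tolerates parentheses still matches.

```python
import re

# Hypothetical patterns: the old matcher did not allow parentheses in the
# optimizer-version line, so "(GPORCA)" broke recognition.
old = re.compile(r'Optimizer version: [\w.]+')
new = re.compile(r'Optimizer(?: \(GPORCA\))? version: [\w.]+')

line = 'Optimizer (GPORCA) version: 3.58.0'
```

The old pattern no longer finds the version line, while the new one matches both the old and the new format.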
-