- 27 Feb 2020, 6 commits
-
-
Committed by Daniel Gustafsson

Backported from master 9aa9dc0a.
Reviewed-by: Mel Kiyama <mkiyama@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson

This fixes multiple occurrences of duplicated words in sentences, like "the the" and "is is", etc. Backported from master 1d44a0c5.
Reviewed-by: Mel Kiyama <mkiyama@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Huiliang.liu

In the case of something like "select ... limit 1;", there may be an attach request after the session is closed. In that case we should ignore the session's error code and just return an empty HTTP OK.
-
Committed by David Yozie
-
Committed by Sambitesh Dash
-
Committed by Tyler Ramer

The logic used in the initial commit is faulty and fragile. In the event that we want to force the master postmaster process to listen on a subset of the addresses available on the system, it is most likely that we don't want to use the address(es) used by the interconnect. With an external network and an internal interconnect, binding the "backend" listening sockets to the "external" network would break the interconnect.
Authored-by: Tyler Ramer <tramer@pivotal.io>
-
- 26 Feb 2020, 6 commits
-
-
Committed by Alexandra Wang

As requested from the field, 3 reserved superuser connections are not enough for GPDB when customers run superuser maintenance scripts. 10 is the same value as the default concurrency limit of the resource group admin_group.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
-
Committed by Alexandra Wang

Previously, an FTS connection was reserved as a superuser connection on both the master and the primaries. However, FTS does not need a connection to the master, so remove the reservation on the master.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
-
Committed by Alexandra Wang

The issue was reported from the field, where the function below was being created:

```
set check_function_bodies = false;

-- wait for gp_vmem_idle_resource_timeout time and then run
CREATE FUNCTION public.f1() RETURNS smallint AS $$
    SELECT f2()
$$ LANGUAGE sql;
```

Ideally, we don't need to check function bodies on the QE since the QD already does it. But on GPDB 6 and below we can't perform that optimization because of GitHub issue #9620.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by ppggff
-
Committed by ppggff
-
Committed by Lisa Owen

* docs - make PXF overview page more friendly
* address comments from David
* include DB2 and Microsoft SQL Server in the list of supported SQL databases
-
- 25 Feb 2020, 2 commits
-
-
Committed by Hubert Zhang

For a query like 'create table t as select * from f()', if f() needs to do dispatch, then it must be run on the QD. Currently, a function can be specified to execute on the master, but the above CTAS query will run the function on an entry DB. In fact, the QD needs to do the CTAS work and cannot run the function at all. To overcome this problem, we introduce a new location option for functions, EXECUTE ON INITPLAN: f() runs on an initplan before the CTAS work, and the function results are stored into a tuplestore. Then, when the real function scan runs on the entry DB, it skips the function logic and fetches tuples from the tuplestore instead. The new plan looks like:

```
Redistribute Motion 1:3  (slice1)
  Hash Key: f.i
  ->  Function Scan on f
      InitPlan 1 (returns $0)  (slice2)
        ->  Function Scan on f f_1
```

Note that this commit only has basic support for this feature; only one function is allowed in the CTAS query. (cherry picked from commit a21ff23b)
-
Committed by Paul Guo

An FTS probe triggered via the query gp_request_fts_probe_scan() or the internal function FtsNotifyProber() may wait an additional ~60 seconds (i.e. GUC gp_fts_probe_interval), because FtsLoop()->WaitLatch() blocks until timeout. The root cause is that the latch in the stack below may be woken up first:

```
WaitLatch()
SyncRepWaitForLSN()
RecordTransactionCommit()
CommitTransaction()
CommitTransactionCommand()
updateConfiguration()
processResponse()
FtsWalRepMessageSegments()
FtsLoop()
```

I found this issue when testing fts_unblock_primary; the test sometimes runs for 60 more seconds than usual. Fix this by rechecking the probe request before FtsLoop()->WaitLatch().
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Cherry-picked from 545f4466
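The fix is an instance of the general recheck-before-wait pattern: a request flag that can be set while the loop is busy servicing the previous wakeup must be tested again before blocking, or the request sits unserved until the timeout fires. A minimal illustrative model (the names `FtsState`, `request_probe`, and `fts_loop_iteration` are made up for this sketch, not GPDB's actual API):

```c
#include <stdbool.h>

/* Toy model of the FTS loop's latch wait. probe_requested can be
 * set by another backend while the loop is busy processing the
 * previous probe response. */
typedef struct FtsState
{
    bool probe_requested;
    int  probes_handled;
    int  timeout_waits;     /* times we blocked for the full timeout */
} FtsState;

static void request_probe(FtsState *s)
{
    s->probe_requested = true;
}

/* One iteration of the loop. The fix amounts to rechecking
 * probe_requested BEFORE waiting on the latch, so a request that
 * arrived while we were busy is served now rather than after a
 * full gp_fts_probe_interval timeout. */
static void fts_loop_iteration(FtsState *s)
{
    if (s->probe_requested)             /* the added recheck */
    {
        s->probe_requested = false;
        s->probes_handled++;
        return;
    }
    s->timeout_waits++;                 /* models WaitLatch() until timeout */
}
```

Without the recheck, a request set between two iterations would only be noticed after the simulated timeout wait.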
-
- 22 Feb 2020, 5 commits
-
-
Committed by Ashwin Agrawal

This reverts commit 70b35f2e. Test plpython_returns is failing in CI. Will look into the failure and bring the change back in after fixing it.
-
Committed by Ashwin Agrawal

It's best to vacuum pg_proc so that buffers for pg_proc are not marked dirty later, when executing the newly created functions.
Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Ashwin Agrawal

Since the QD performs the function-body check and only then dispatches to the QEs, we can avoid performing the check again on the QE. Hence, set `check_function_bodies=false` for QE processes. Without this, the GUC `check_function_bodies` had to be in sync between QD and QE, since if `check_function_bodies=false` on the QD, the QE must also not perform the check. Disabling it always on the QE eliminates that need. The issue was reported from the field, where the function below was being created:

```
set check_function_bodies = false;

-- wait for gp_vmem_idle_resource_timeout time and then run
CREATE FUNCTION public.f1() RETURNS smallint AS $$
    SELECT f2()
$$ LANGUAGE sql;
```

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by ppggff

Missing initialization of 'holdsStrongLockCount' may cause RemoveLocalLock() to fail to operate on the fast-path array.
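The bug class here is a struct field left uninitialized at creation time and then read on a later cleanup path. A hypothetical sketch of the pattern (the stub type and helper below are illustrative, not the actual LOCALLOCK code):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct LocalLockStub
{
    int  lockId;
    bool holdsStrongLockCount;  /* read later by cleanup code */
} LocalLockStub;

/* Correct version: zero the whole entry when it is created, so
 * cleanup code can trust holdsStrongLockCount. The bug variant
 * would malloc() and set only lockId, leaving the flag as whatever
 * garbage happened to be on the heap. */
static LocalLockStub *create_local_lock(int id)
{
    LocalLockStub *lock = malloc(sizeof(LocalLockStub));
    if (lock == NULL)
        return NULL;
    memset(lock, 0, sizeof(LocalLockStub));  /* the missing initialization */
    lock->lockId = id;
    return lock;
}
```

The same reasoning applies to any hash-table entry API that hands back uninitialized memory for newly created entries: every field the later code paths read must be set at entry creation.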
-
Committed by Mel Kiyama

Sync docs with backup/restore 1.17.
-
- 21 Feb 2020, 1 commit
-
-
Committed by Ashwin Agrawal

split_rows() scans tuples from T and routes them to the new parts (A, B) based on A's or B's constraints. If T has one or more dropped columns before its partition key, T's partition key will have a different attribute number from its new parts'. In this case, the constraints pick the wrong column, which can cause bad behavior. To fix this, each tuple iteration should reconstruct the partition tuple slot and assign it to econtext before the ExecQual calls. The reconstruction can happen once or twice, because A and B may have two different tupdescs.

One bad behavior: rows are split into the wrong partitions. Reproduce:

```sql
DROP TABLE IF EXISTS users_test;

CREATE TABLE users_test
(
    id INT,
    dd TEXT,
    user_name VARCHAR(40),
    user_email VARCHAR(60),
    born_time TIMESTAMP,
    create_time TIMESTAMP
)
DISTRIBUTED BY (id)
PARTITION BY RANGE (create_time)
(
    PARTITION p2019 START ('2019-01-01'::TIMESTAMP) END ('2020-01-01'::TIMESTAMP),
    DEFAULT PARTITION extra
);

/* Drop useless column dd for some reason */
ALTER TABLE users_test DROP COLUMN dd;

/* Forgot/Failed to split out new partitions beforehand */
INSERT INTO users_test VALUES(1, 'A', 'A@abc.com', '1970-01-01', '2020-01-01 12:00:00');
INSERT INTO users_test VALUES(2, 'B', 'B@abc.com', '1980-01-01', '2020-01-02 18:00:00');
INSERT INTO users_test VALUES(3, 'C', 'C@abc.com', '1990-01-01', '2020-01-03 08:00:00');

/* New partition arrives late */
ALTER TABLE users_test SPLIT DEFAULT PARTITION
    START ('2020-01-01'::TIMESTAMP) END ('2021-01-01'::TIMESTAMP)
    INTO (PARTITION p2020, DEFAULT PARTITION);

/*
 * - How many new users already in 2020?
 * - Wow, no one.
 */
SELECT count(1) FROM users_test_1_prt_p2020;
```

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
(cherry picked from commit 101922f1)
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 19 Feb 2020, 3 commits
-
-
Committed by Huiliang.liu

Print errno and message if local_send() fails. Print detailed information at session end.
-
Committed by Haozhou Wang

1. When two gppkg packages have the same dependencies, the gppkg utility refuses to install the second package and throws an error. This patch fixes the issue so that the second package installs successfully.
2. Fix an install/uninstall issue when the master and standby master use the same node address.

PS: This patch is backported from the master branch.
-
Committed by Ashwin Agrawal

`ifa_addr` may be NULL for an interface returned by getifaddrs(). Hence, a check for this should be performed; otherwise the code crashes. As a side effect of this crash, gpinitstandby always fails on my Ubuntu laptop. The interface for which `getifaddrs()` returned a NULL address for me is:

```
gpd0: flags=4240<POINTOPOINT,NOARP,MULTICAST>  mtu 1500
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

(gdb) p *list
$5 = {ifa_next = 0x5555555586a8, ifa_name = 0x555555558694 "gpd0",
      ifa_flags = 4240, ifa_addr = 0x0, ifa_netmask = 0x0,
      ifa_ifu = {ifu_broadaddr = 0x0, ifu_dstaddr = 0x0},
      ifa_data = 0x555555558bb8}
```

Reviewed-by: Jacob Champion <pchampion@pivotal.io>
Reviewed-by: Mark Sliva <msliva@pivotal.io>
-
- 18 Feb 2020, 1 commit
-
- 17 Feb 2020, 1 commit
-
-
Committed by Weinan WANG

Upstream does not create a new pathkey in the convert_subquery_pathkeys function. It also raises an issue in GPDB, so revert it. Cherry-picked from 0138eed4.
-
- 15 Feb 2020, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashuka Xue

This commit adds a new optimizer cost model value to use for experimental features and developer testing. Setting `optimizer_cost_model=experimental` will use the new costing formula. Currently it is only used for a bitmap costing change.
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Ashuka Xue <axue@pivotal.io>
-
Committed by Ashwin Agrawal

Currently, ORCA produces a wrong result for the query used:

```
SELECT 2,1 FROM (
    SELECT generate_series(1, MAX_BUFFERED_TUPLES + 1)
    FROM (VALUES (5)) t(MAX_BUFFERED_TUPLES)
) t;

 ?column? | ?column? | generate_series
----------+----------+-----------------
        2 |        1 |               1
        2 |        1 |               2
        2 |        1 |               3
        2 |        1 |               4
        2 |        1 |               5
        2 |        1 |               6
(6 rows)
```

Hence, avoid using that query and instead use a simpler version to pass the test on ORCA-enabled builds.
-
Committed by Melanie Plageman

Commit a8aa1c4a introduced a subtle bug: when a partition has a different distribution column number than the base table, we'd lazily construct a tuple table slot for that partition -- but in a (mistakenly) short-lived memory context. This is exposed when COPY handles more than MAX_BUFFERED_TUPLES lines, resetting the per-tuple memory context in between buffers. Resolves GitHub issue #9170.
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
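The bug class is keeping a pointer to memory allocated in a context that gets reset per batch. A toy arena below models this (illustrative only; PostgreSQL's real MemoryContext API is different): objects meant to outlive a buffer reset must come from a long-lived context, which is what the fix amounts to.

```c
#include <string.h>

/* Toy per-tuple arena: everything allocated from it is invalidated
 * when the arena is reset between batches. */
typedef struct Arena
{
    char   buf[4096];
    size_t used;
} Arena;

static void *arena_alloc(Arena *a, size_t n)
{
    void *p = a->buf + a->used;   /* no overflow handling in this sketch */
    a->used += n;
    return p;
}

static void arena_reset(Arena *a)
{
    /* Poison freed memory so stale pointers are obviously invalid,
     * mimicking CLOBBER_FREED_MEMORY-style debugging. */
    memset(a->buf, 0xdd, sizeof(a->buf));
    a->used = 0;
}
```

A lazily built object cached across batches (like the tuple table slot in this commit) must be allocated from an arena that is never reset mid-operation; allocating it from the per-tuple arena leaves a dangling pointer after the first reset.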
-
Committed by Jesse Zhang

Commit a8aa1c4a introduced a bug when a partition has a different distribution policy (in terms of column numbers) from the base table: we would perform a partition lookup -- using a tuple table slot for the partition but (incorrectly) with a tuple descriptor for the base table. This would sometimes lead to an error of "no partition for partitioning key". Upon closer inspection, we didn't even need to look up the partition, because the caller already knew! This commit fixes that, and simplifies the logic in GetDistributionPolicyForPartition to just take a resultRelInfo instead of the tuple values.
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Ashwin Agrawal
This reverts commit d90ac1a1.
-
- 14 Feb 2020, 2 commits
-
-
Committed by Paul Guo

Main changes are:
- Merge isQDContext() and isQEContext(), since the latter is a bit buggy and there is no need to separate them in gpdb master now.
- Remove an incorrect or unnecessary switch in notifyCommittedDtxTransaction().
- Rename some two-phase variables and functions, since they can also be used in one-phase commits.
- Remove some unnecessary Assert code (some conditions are already guaranteed by earlier logic; some are obvious).
- Rename DTX_STATE_PERFORMING_ONE_PHASE_COMMIT to DTX_STATE_NOTIFYING_ONE_PHASE_COMMIT to better align with the 2PC code.
- Remove the useless state DTX_STATE_FORCED_COMMITTED.

Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
Reviewed-by: Gang Xiong <gxiong@pivotal.io>
Cherry-picked from 83da7ddf
-
Committed by Shreedhar Hardikar

Includes ORCA-side refactors and renames in preparation for supporting opfamilies in ORCA.
-
- 13 Feb 2020, 4 commits
-
-
Committed by Paul Guo

Main changes are:
- Merge isQDContext() and isQEContext(), since the latter is a bit buggy and there is no need to separate them in gpdb master now.
- Remove an incorrect or unnecessary switch in notifyCommittedDtxTransaction().
- Rename some two-phase variables and functions, since they can also be used in one-phase commits.
- Remove some unnecessary Assert code (some conditions are already guaranteed by earlier logic; some are obvious).
- Rename DTX_STATE_PERFORMING_ONE_PHASE_COMMIT to DTX_STATE_NOTIFYING_ONE_PHASE_COMMIT to better align with the 2PC code.
- Remove the useless state DTX_STATE_FORCED_COMMITTED.

Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
Reviewed-by: Gang Xiong <gxiong@pivotal.io>
Cherry-picked from 83da7ddf
-
Committed by Asim R P

Incremental recovery and rebalance operations involve running pg_rewind against failed primaries. This patch changes gprecoverseg so that pg_rewind is invoked in parallel, using the WorkerPool interface, for each affected segment in the cluster. There is no reason to rewind segments one after the other. Fixes GitHub issue #9466.
Reviewed by: Mark Sliva and Paul Guo
(cherry picked from commit 43ad9d05)
-
Committed by Heikki Linnakangas

We were passing the parent memory context as NULL, which caused the allocations to be permanent. That was surely not intended.
- 12 Feb 2020, 2 commits
-
-
Committed by Jamie McAtamney

Previously, gpstart could not start the cluster if a standby master host was configured but currently down. In order to check whether the standby was supposed to be the acting master (and prevent the master from being started if that was the case), gpstart needed to access the standby host to retrieve the TimeLineID of the standby, and if the standby host was down the master would not start. This commit modifies gpstart to assume that the master host is the acting master if the standby is unreachable, so that it never gets into a state where neither the master nor the standby can be started.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Adam Berlin <aberlin@pivotal.io>
(cherry picked from commit 29c759ab8c1f4179e46b51c91a808e76f6747075)
-
Committed by Kalen Krempely

Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
- 11 Feb 2020, 1 commit
-
-
Committed by Huiliang.liu

gpload uses gpversion.py to parse the GPDB version so that it can be compatible with both GPDB 5 and GPDB 6. This way we only need to maintain one gpload version, and new features and bug fixes can also be used by GPDB 5 customers. Therefore, we package gppylib.gpversion into the GPDB clients tarball.
-