- 13 Aug, 2020: 2 commits
-
-
Committed by Pengzhou Tang
-
Committed by Divyesh Vanjare
For a table partitioned by a timestamp column, a query such as SELECT * FROM my_table WHERE ts::date = '2020-05-10' should only scan a few partitions. ORCA previously supported only implicit casts for partition selection. This commit extends ORCA to also support a subset of lossy (assignment) casts that are order-preserving (increasing) functions. This improves ORCA's ability to do partition elimination and produce faster plans. To ensure correctness, the additional supported functions are captured in an allow-list in gpdb::IsFuncAllowedForPartitionSelection(), which includes some built-in lossy casts such as ts::date, float::int, etc.

Details:
- For list partitions, we compare our predicate with each distinct value in the list to determine whether the partition has to be selected/eliminated. Hence, none of the operators need to be changed for list partition selection.
- For range partition selection, we check the bounds of each partition and compare them with the predicates to determine whether the partition has to be selected/eliminated. A partition such as [1, 2) shouldn't be selected for float = 2.0, but should be selected for float::int = 2. We change the logic to handle equality predicates differently when lossy casts are present (ub: upper bound, lb: lower bound):

      if (lossy cast on partition col):
          (lb::int <= 2) and (ub::int >= 2)
      else:
          ((lb <= 2 and inclusive lb) or (lb < 2)) and ((ub >= 2 and inclusive ub) or (ub > 2))

- CMDFunctionGPDB now captures whether or not a cast is a lossy cast supported by ORCA for partition selection. This is then used in Expr2DXL translation to identify how partitions should be selected.
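The range-partition rule above can be sketched in C. This is an illustrative sketch, not the actual ORCA implementation: the helper names are hypothetical, and a truncating cast is assumed for the lossy float-to-int case for simplicity.

```c
#include <stdbool.h>

/* No lossy cast: respect inclusive/exclusive bounds exactly. */
static bool
select_range_part(double lb, bool lb_incl, double ub, bool ub_incl, double val)
{
    /* ((lb <= val and inclusive lb) or (lb < val)) and
     * ((ub >= val and inclusive ub) or (ub > val)) */
    return ((lb_incl && lb <= val) || lb < val) &&
           ((ub_incl && ub >= val) || ub > val);
}

/* Lossy cast on the partition column, e.g. float::int = val:
 * compare the casted bounds instead, ignoring inclusivity. */
static bool
select_range_part_lossy(double lb, double ub, long val)
{
    return (long) lb <= val && (long) ub >= val;
}
```

For a partition [1, 2), select_range_part(1, true, 2, false, 2.0) is false (float = 2.0 must not select it), while select_range_part_lossy(1, 2, 2) is true (float::int = 2 must select it), matching the behavior described above.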
-
- 12 Aug, 2020: 2 commits
-
-
Committed by Heikki Linnakangas
ic_proxy_backend.h includes libuv's uv.h header, and ic_proxy_backend.h was being included in ic_tcp.c even when compiling with --disable-ic-proxy.
-
Committed by Hubert Zhang
Previously, when backends connected to a proxy, we needed to set up the domain socket pipe and send the HELLO message (and receive the ACK message) in a blocking, non-parallel way. This made it hard for ICPROXY to introduce check_for_interrupt during backend registration. By utilizing the libuv loop, we can register backends in parallel. Note that this is one of the steps to replace all the ic_tcp backend logic currently reused by ic_proxy. In the future, we should use libuv to replace all the backend logic, from registration to sending/receiving data.
Co-authored-by: Ning Yu <nyu@pivotal.io>
-
- 10 Aug, 2020: 2 commits
-
-
Committed by ppggff
When the GUC parameter resource_cleanup_gangs_on_wait is on, the backend will clean up idle gangs before waiting for the resource queue lock. The cleanup operation involves network IO, so it takes a while. In the current code, the cleanup operation still holds a partition lock that would normally only be held for a short period of time, which prevents normal lock table operations by other backends. The cleanup operation should be moved to after releasing the partition lock and before the backend starts waiting.
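The ordering fix described above can be sketched as follows, with a pthread mutex standing in for the partition lock and stub functions recording the call order; all names are illustrative, not the actual GPDB code.

```c
#include <pthread.h>
#include <string.h>

static pthread_mutex_t partition_lock = PTHREAD_MUTEX_INITIALIZER;
static char call_order[64];

static void cleanup_idle_gangs(void)      { strcat(call_order, "cleanup;"); }
static void wait_for_resource_queue(void) { strcat(call_order, "wait;"); }

static void
acquire_queue_slot(void)
{
    pthread_mutex_lock(&partition_lock);
    /* only short bookkeeping runs while the partition lock is held */
    pthread_mutex_unlock(&partition_lock);

    /* the slow, network-bound cleanup runs after the lock is released
     * and before the backend starts waiting on the resource queue */
    cleanup_idle_gangs();
    wait_for_resource_queue();
}
```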
-
Committed by Ning Yu
A typical mistake when allocating typed memory is:

    int64 *ptr = malloc(sizeof(int32));

To prevent this, we now make ic_proxy_new() a typed allocator; it always returns a pointer of the specified type, for example:

    int64 *p1 = ic_proxy_new(int64); /* good */
    int64 *p2 = ic_proxy_new(int32); /* bad, gcc will raise a warning */

Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
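A minimal sketch of a typed allocator in the spirit of ic_proxy_new(); the real macro in the ic-proxy code may differ. Because the macro casts the result to the named type, assigning it to a variable of a different pointer type makes gcc emit an incompatible-pointer-types warning.

```c
#include <stdint.h>
#include <stdlib.h>

/* Typed allocator sketch: allocates zeroed storage sized and typed
 * for the named type, so mismatched assignments draw a warning. */
#define ic_proxy_new(type) ((type *) calloc(1, sizeof(type)))

static int64_t *
example(void)
{
    int64_t *p1 = ic_proxy_new(int64_t);    /* good: types match */
    /* int64_t *p2 = ic_proxy_new(int32_t);    bad: gcc would warn */
    return p1;
}
```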
-
- 08 Aug, 2020: 4 commits
-
-
Committed by Mel Kiyama
* docs - support proxies for GPDB interconnect
  - New topic in the Admin Guide
  - New GUC gp_interconnect_proxy_addresses
  - Updated GUC gp_interconnect_type with the new value PROXY
  Also added a note to the gpexpand documents: do not use proxy during expand.
* docs - review comment updates
* docs - review comment updates
  - update IC proxy example function
  - other minor edits
* docs - add note about running the ic-proxy configuration function.
-
Committed by Lisa Owen
-
Committed by Lisa Owen
-
Committed by Lisa Owen
-
- 07 Aug, 2020: 5 commits
-
-
Committed by Heikki Linnakangas
The SET DISTRIBUTED BY and EXPAND TABLE subcommands worked differently from all other ALTER TABLE subcommands in who was responsible for closing the relcache reference. In all other subcommands, ATRewriteCatalogs opens and closes the 'rel', but for these two commands, the ATExec*() function closed it. I don't see any good reason for that. There were very old comments about forcing the relcache entry to be forgotten, but that explanation doesn't make sense to me, and everything seems to work without the early closing. Maybe it was needed a long time ago, but the code has changed a lot since it was written. Simplify by closing the relation in ATRewriteCatalogs(), like with all other ALTER TABLE subcommands.
Reviewed-by: Asim R P <pasim@vmware.com>
-
Committed by Ning Yu
The copydml test creates both BEFORE and AFTER triggers on a table and checks the execution order in the client output. The original output order is "BEFORE -> RESULT -> AFTER", which is the order produced in ic-tcp mode; in ic-udpifc mode the order can become "RESULT -> BEFORE -> AFTER", so it is nondeterministic. There are 2 variants of the answer files for these 2 orders, so the test can pass with either order. In ic-proxy mode, the order can also be "BEFORE -> AFTER -> RESULT", so we need a 3rd variant of the answer file for this order. By providing this we can pass the test in ic-proxy mode and re-enable the copydml test, which was previously disabled on ic-proxy pipeline jobs.
Reviewed-by: Asim R P <pasim@vmware.com>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
-
Committed by Hao Wu
walreceiver is quite sensitive to any WAL write. After creating a table and inserting some tuples, it doesn't run a vacuum, and there may be other cases that cause some WAL traffic. One of the WAL records is relevant to hot standby, recording RUNNING_XACTS. The last test_receive() only tests that there should be no WAL traffic after receiving the WAL records up to the current xlog location; there is still a gap in which a new WAL record may be transmitted to the walreceiver. It's not meaningful to run test_receive(), since the function test_receive_and_verify() has already verified that all the WAL traffic up to the latest xlog location is received. So, remove test_receive().
Reviewed-by: Paul Guo <paulguo@gmail.com>
Reviewed-by: (Jerome) Junfeng Yang <jeyang@pivotal.io>
-
Committed by Abhijit Subramanya
gpsd was failing because the connectionString that we passed to pgdb.connect had the parameters in the wrong order. It started failing after upgrading to a higher version of PyGreSQL. So use a dictionary instead, to avoid passing the parameters incorrectly.
Co-authored-by: Ashwin Agrawal <aashwin@vmware.com>
-
Committed by Lisa Owen
-
- 06 Aug, 2020: 4 commits
-
-
Committed by Paul Guo
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
-
Committed by Paul Guo
We've seen a panic case on GPDB 6 with a stack as below:

    3  markDirty (isXmin=0 '\000', tuple=0x7effe221b3c0, relation=0x0, buffer=16058) at tqual.c:105
    4  SetHintBits (xid=<optimized out>, infomask=1024, rel=0x0, buffer=16058, tuple=0x7effe221b3c0) at tqual.c:199
    5  HeapTupleSatisfiesMVCC (relation=0x0, htup=<optimized out>, snapshot=0x15f0dc0 <CatalogSnapshotData>, buffer=16058) at tqual.c:1200
    6  0x00000000007080a8 in systable_recheck_tuple (sysscan=sysscan@entry=0x2e85940, tup=tup@entry=0x2e859e0) at genam.c:462
    7  0x000000000078753b in findDependentObjects (object=0x2e856e0, flags=<optimized out>, stack=0x0, targetObjects=0x2e85b40, pendingObjects=0x2e856b0, depRel=0x7fff2608adc8) at dependency.c:793
    8  0x00000000007883c7 in performMultipleDeletions (objects=objects@entry=0x2e856b0, behavior=DROP_RESTRICT, flags=flags@entry=0) at dependency.c:363
    9  0x0000000000870b61 in RemoveRelations (drop=drop@entry=0x2e85000) at tablecmds.c:1313
    10 0x0000000000a85e48 in ExecDropStmt (stmt=stmt@entry=0x2e85000, isTopLevel=isTopLevel@entry=0 '\000') at utility.c:1765
    11 0x0000000000a87d03 in ProcessUtilitySlow (parsetree=parsetree@entry=0x2e85000,

The reason is that we pass a NULL relation to the visibility check code, which might use the relation variable to determine whether the hint bit should be set or not. Let's pass the correct relation variable even if it might not end up being used. I'm not able to reproduce the issue locally, so I can not provide a test case, but it is surely a potential issue.
Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
-
Committed by Zhenghua Lyu
When an update or delete statement errors out because the CTID does not belong to the local segment, we should also print the CTID of the tuple, so that it is much easier to locate the wrongly distributed data via: `select * from t where gp_segment_id = xxx and ctid='(aaa,bbb)'`.
-
Committed by David Yozie
-
- 05 Aug, 2020: 2 commits
- 04 Aug, 2020: 2 commits
-
-
Committed by Ning Yu
Fixed a bug where the SIGHUP handler was installed for SIGINT by mistake, so the ic-proxy bgworkers would die on SIGHUP. By correcting the signal name, the ic-proxy bgworkers can now reload postgresql.conf when "gpstop -u" is executed.
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
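The bug class fixed above can be sketched with plain signal(2) registration; the handler below is illustrative, not the actual ic-proxy code.

```c
#include <signal.h>

static volatile sig_atomic_t got_sighup = 0;

static void
sighup_handler(int signo)
{
    (void) signo;
    got_sighup = 1;   /* e.g. flag a postgresql.conf reload */
}

static void
install_handlers(void)
{
    /* the bug was equivalent to: signal(SIGINT, sighup_handler);
     * which leaves SIGHUP at its default (terminate) disposition */
    signal(SIGHUP, sighup_handler);   /* correct signal name */
}
```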
-
Committed by Adam Lee
Note that since PyGreSQL 5.0 this method will return the values of array-type columns as Python lists.
ref: https://pygresql.org/contents/pg/query.html
-
- 03 Aug, 2020: 7 commits
-
-
Committed by Hubert Zhang
The return value of strcmp is not checked in some branches.
-
Committed by Hubert Zhang
conn = dbconn.connect() should be aligned with the if statement; otherwise it will never be executed.
-
Committed by Ning Yu
In a query that contains multiple init/sub plans, the packets of the second subplan might be received while the first is still being processed in ic-proxy mode; this is because ic-proxy mode uses a local host handshake instead of the global one. To distinguish the packets of different subplans, especially early-arriving ones, we must stop handling on the BYE immediately and pass any unhandled early packets to the successor or the placeholder. This fixes the random hanging during the ICW parallel group of qp_functions_in_from. No new test is added.
Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
Co-authored-by: Ning Yu <nyu@pivotal.io>
-
Committed by Hubert Zhang
sizeof(HeapTuple *) should be sizeof(HeapTuple).
-
Committed by Hubert Zhang
Remove dead code: insertDesc is always NULL in ao_vacuum_rel_compact().
-
Committed by Hubert Zhang
Remove an unused variable. For tuplesort.h, we don't support mksort-based CLUSTER, so we should just set is_mk_tuplesortstate to false.
-
Committed by Hubert Zhang
Clean up identical code in heap.c and analyze.c.
-
- 01 Aug, 2020: 5 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Also, the entry for ExtprotocolRelationid was in the wrong place in object_classes[]. It's a bit surprising that it didn't cause any ill effects, but let's fix it in any case.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It was added in commit 0138eed4, but was never used for anything.
-
Committed by Heikki Linnakangas
-
- 31 Jul, 2020: 5 commits
-
-
Committed by Tyler Ramer
The command execution framework shipped with a fault injection in delivered code. See https://github.com/greenplum-db/gpdb/issues/10546 for execution details and implications. The fault injection framework appears to have been added in 2009 and used sparingly; it should be removed until it can be safely replaced. Additionally, the "gppylib/test/regress" folder used the fault injector, but the "check-regress" target seems not to have been called; this is evident because PyGreSQL regression checks are present, yet PyGreSQL has not been in master for some time without these tests raising any errors.
Authored-by: Tyler Ramer <tramer@vmware.com>
-
Committed by Hubert Zhang
The len parameter of memcpy should be checked to be non-negative in gp_hyperloglog.c. errno should be used instead of seekResult in cdbappendonlystorageread.c.
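The memcpy check described above guards against a negative int length: memcpy takes a size_t, so a negative value would silently wrap to a huge unsigned size. A minimal sketch with a hypothetical helper, not the actual gp_hyperloglog code:

```c
#include <stdbool.h>
#include <string.h>

/* Copy len bytes from src to dst, rejecting negative lengths instead
 * of letting them wrap to a huge size_t inside memcpy. */
static bool
checked_copy(void *dst, const void *src, int len)
{
    if (len < 0)
        return false;
    memcpy(dst, src, (size_t) len);
    return true;
}
```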
-
Committed by Adam Lee
Unlogged tables do not propagate to replica servers; skip them and their initialization forks.
-
Committed by Hubert Zhang
Fix some resource leaks.
-
Committed by Ashwin Agrawal
Add pg_stat_clear_snapshot() in functions looping over gp_stat_replication / pg_stat_replication to refresh the result every time the query is run as part of the same transaction. Without pg_stat_clear_snapshot(), query results are not refreshed for pg_stat_activity nor for the xx_stat_replication functions on multiple invocations inside a transaction, so in its absence the tests become flaky. Also, the tests commit_blocking_on_standby and dtx_recovery_wait_lsn were initially committed with wrong expectations and hence failed to test the intended behavior; they now reflect the correct expectations.
-