- 09 Jan 2019, 15 commits
-
-
Committed by Georgios Kokolatos
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Heikki Linnakangas
cdbpath_motion_for_join() was sometimes returning an incorrect locus for a join between SingleQE and Hashed loci. This happened when even the "last resort" strategy of moving the hashed side to the single QE failed. This can happen at least in the query added to the regression tests: it involves a nested loop join path where one side has a SingleQE locus and the other a Hashed locus, and there are no join predicates that can be used to determine the resulting locus. While we're at it, turn the assertion that this tripped, and some related ones in the same place, into elog()s. No need to crash the whole server if the planner screws up, and it's good to perform these sanity checks in production, too. The failure of the "last resort" codepath was left unhandled by commit 0522e960. Fixes https://github.com/greenplum-db/gpdb/issues/6643. Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Yandong Yao
-
Committed by Richard Guo
The following identity holds: (A antijoin B on (Pab)) innerjoin C on (Pac) = (A innerjoin C on (Pac)) antijoin B on (Pab). So we should not enforce join ordering for ANTI joins. Instead we need to collapse ANTI join nodes so that they participate fully in the join order search. For example: select * from a join b on a.i = b.i where not exists (select i from c where a.i = c.i); For this query, the original join order is "(a innerjoin b) antijoin c". If we enforce ANTI join ordering, this will be the final join order. But another join order, "(a antijoin c) innerjoin b", is also legal. We should take this order into consideration and pick the cheaper one. The same applies to LASJ joins. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
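The identity above can be checked on toy data. The following is a minimal Python sketch (not GPDB code; the relations, predicates, and helper names are invented for illustration) showing that the two join orders produce the same rows:

```python
def inner_join(A, B, pred):
    # Naive nested-loop inner join: keep every matching pair.
    return [(a, b) for a in A for b in B if pred(a, b)]

def anti_join(A, B, pred):
    # Anti join: keep rows of A that have *no* match in B.
    return [a for a in A if not any(pred(a, b) for b in B)]

A, B, C = [1, 2, 3], [2], [2, 3, 4]
Pab = lambda a, b: a == b   # join predicate between A and B
Pac = lambda a, c: a == c   # join predicate between A and C

# (A antijoin B on Pab) innerjoin C on Pac
left = inner_join(anti_join(A, B, Pab), C, Pac)
# (A innerjoin C on Pac) antijoin B on Pab, applied to the A column
right = anti_join(inner_join(A, C, Pac), B, lambda ac, b: Pab(ac[0], b))

assert sorted(left) == sorted(right)  # both orders give the same result
```

Because both orders are legal, the planner is free to cost both and pick the cheaper one, which is the point of collapsing the ANTI join node into the join search.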
-
Committed by Ashwin Agrawal
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ashwin Agrawal
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ashwin Agrawal
With this commit, a QE in maintenance mode will ignore the distributed log and behave like a single-instance Postgres. Without this, when a QE is started as a single instance only, no distributed snapshot is in effect, so the distributed oldest xmin points to the oldest datfrozenxid in the system. As a result, vacuuming any table yields HEAP_TUPLE_RECENTLY_DEAD and dead rows are never cleaned up. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Pengzhou Tang
All lwlocks are stored in MainLWLockArray, which is an array of LWLockPadded structures: typedef union LWLockPadded { LWLock lock; char pad[LWLOCK_PADDED_SIZE]; } LWLockPadded; The calculation in SyncHTPartLockId to fetch an lwlock is incorrect because it offsets the array as if it were an array of bare LWLocks. In the current code base it happens to work because sizeof(LWLock) is 32; if the LWLock structure is ever enlarged, the calculation will break.
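The padding pitfall can be sketched with Python's ctypes (the field names and sizes below are illustrative, not GPDB's actual LWLock layout): indexing the padded array and taking .lock steps by the padded element size, while offsetting by sizeof(LWLock) only works by coincidence when the two sizes happen to match.

```python
import ctypes

LWLOCK_PADDED_SIZE = 64  # illustrative padded/cache-line size

class LWLock(ctypes.Structure):
    # Illustrative fields; sizeof(LWLock) == 8 here, much less than 64.
    _fields_ = [("state", ctypes.c_uint32),
                ("waiters", ctypes.c_uint16 * 2)]

class LWLockPadded(ctypes.Union):
    _fields_ = [("lock", LWLock),
                ("pad", ctypes.c_char * LWLOCK_PADDED_SIZE)]

locks = (LWLockPadded * 8)()
i = 3

# Correct: index the padded array, which steps by LWLOCK_PADDED_SIZE.
correct = ctypes.addressof(locks[i].lock)
# Buggy (the SyncHTPartLockId mistake): step by sizeof(LWLock) instead.
buggy = ctypes.addressof(locks) + i * ctypes.sizeof(LWLock)

assert correct == ctypes.addressof(locks) + i * LWLOCK_PADDED_SIZE
assert buggy != correct  # only equal if sizeof(LWLock) happens to match the pad
```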
-
Committed by Pengzhou Tang
-
Committed by Pengzhou Tang
GPDB always sets the REWIND flag for subplans, including init plans. In 6195b967, we tightened the restriction so that a node that is not eager-free cannot be squelched early, including init plans. This exposed a few hidden bugs: if an init plan contains a motion node that needs to be squelched early, the whole query gets stuck in cdbdisp_checkDispatchResult() because some QEs keep sending tuples. To resolve this, use the DISPATCH_WAIT_FINISH mode for the dispatcher to wait for the dispatch results of an init plan. An init plan with a motion is always executed on the QD and should always be a SELECT-like plan, and it has already fetched all the tuples it needs before the dispatcher starts waiting for the QEs, so DISPATCH_WAIT_FINISH is the right mode for init plans.
-
Committed by Ekta Khanna
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Ekta Khanna
As part of commit dc78e56c, the distributed snapshot logic was modified to use latestCompletedDxid. This changed xmax from an inclusive to an exclusive bound for visible transactions in the snapshot. Hence, update the check to return DISTRIBUTEDSNAPSHOT_COMMITTED_INPROGRESS even for a transaction id equal to the global xmax. Another way to fix this would be to use latestCompletedDxid without the +1 for xmax, but it is better to keep the logic similar to the local snapshot check and not have xmax in the inclusive range of visible transactions. This was exposed in CI by the test isolation/results/heap-repeatable-read-vacuum-freeze failing intermittently, due to the isolation framework itself triggering a query on pg_locks to check for deadlocks. This commit adds an explicit test to cover the scenario. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
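The boundary change can be modeled in a few lines. This is a simplified Python stand-in for the C visibility check (function and return names are invented): with xmax = latestCompletedDxid + 1, the bound is exclusive, so a dxid equal to xmax must be reported as in-progress.

```python
def distributed_snapshot_check(dxid, xmin, xmax):
    """Simplified visibility check; xmax = latestCompletedDxid + 1 (exclusive)."""
    if dxid < xmin:
        return "COMMITTED_VISIBLE"      # completed before the snapshot began
    if dxid >= xmax:                    # '>=' not '>': xmax itself is NOT visible
        return "COMMITTED_INPROGRESS"
    return "CHECK_INPROGRESS_ARRAY"     # consult the snapshot's in-progress list

# A dxid equal to the global xmax is treated as in-progress, matching the fix.
assert distributed_snapshot_check(10, 5, 10) == "COMMITTED_INPROGRESS"
assert distributed_snapshot_check(4, 5, 10) == "COMMITTED_VISIBLE"
```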
-
Committed by Ashwin Agrawal
With commit 8a11bfff, aggressive restart point creation is no longer performed in GPDB either. Since CreateRestartPoint() is not coded to be called from the startup process, a GPDB-specific exception was added in the past to make the earlier aggressive restart point creation, which could happen via the startup process, work correctly. Now a restartpoint is created on a checkpoint record only when gp_replica_check is running, which should be done via the checkpointer process. Eliminate any case of calling CreateRestartPoint() from the startup process, and thereby remove the GPDB-added exception in CreateRestartPoint() and align with upstream code.
-
Committed by Heikki Linnakangas
The gptransfer behave test was using gpdiff to compare data between the source and target systems, relying on gpdiff to mask row order differences. However, after 1f44603a, gpdiff no longer recognized the results as psql result sets, because the test did not echo the SELECT statements to the output, and gpdiff expects to see those. Fix this by echoing the statements, like pg_regress does. That makes the output more readable anyway when there are differences. While we're at it, change the gpdiff invocation to produce a unified diff, which makes the output a lot more readable if the test fails because of a difference.
-
Committed by Heikki Linnakangas
The test was using "-- ignore" to make gpdiff ignore any differences in the test output. But after commit 1f44603a, gpdiff doesn't consider the test's output a psql result set anymore, so the "-- ignore" directive no longer works. Use the more common "-- start_ignore"/"-- end_ignore" block instead. (I'm not sure how useful the test is if we don't check the output, but that's a different story.)
-
- 08 Jan 2019, 16 commits
-
-
Committed by Pengzhou Tang
Previously, even when a connection had been explicitly set to inactive, the old code might still treat it as active if conn->cdbProc was not null and conn->sndQueue was not empty, and increase activeCount. In the next loop, because conn->stillActive was true, conn->unackQueue and conn->sndQueue were never freed, activeCount stayed non-zero, and this could cause an infinite loop.
-
Committed by Heikki Linnakangas
Commit 7d7782f1 changed the formatting of result sets slightly in isolation2 output. I missed changing these expected outputs in that commit.
-
Committed by Heikki Linnakangas
Improve the detection of the beginning of a result set. Previously, it would get confused by comments like "-------", which look a lot like the beginning of a single-column psql result set. That doesn't matter much as long as the test is passing, but if such a test fails, the diff was very difficult to read, as atmsort reordered the SQL lines, too. Make the detection more resilient by looking at the previous line. In a real psql result set, the previous line should be a header line, like " col1 | col2 ". A header line begins and ends with spaces; anything else means that we're seeing a SQL comment rather than a psql result set. While we're at it, if the "------" line has any leading or trailing whitespace, it's not a psql result set either. I'm not sure why we were lenient on that, but let's make that more strict, too. Reviewed-by: Asim R P <apraveen@pivotal.io>
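The heuristic described above can be sketched as follows. Note that gpdiff/atmsort are actually Perl; this is just a Python illustration of the rule, with an invented function name:

```python
import re

def is_result_set_divider(line, prev_line):
    """Decide whether a '-----' line starts a psql result set.

    The divider must consist solely of dashes (with '+' separators for
    multi-column output) and have no leading or trailing whitespace;
    otherwise it is a SQL comment. The previous line must look like a
    psql header, e.g. " col1 | col2 ", which begins and ends with a space.
    """
    if not re.fullmatch(r"-+(\+-+)*", line.rstrip("\n")):
        return False
    prev = prev_line.rstrip("\n")
    return prev.startswith(" ") and prev.endswith(" ")

assert is_result_set_divider("------+------", " col1 | col2 ")      # real result set
assert not is_result_set_divider("-------", "-- a SQL comment")     # comment, no header
assert not is_result_set_divider(" ------", " col1 ")               # leading whitespace
```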
-
Committed by Heikki Linnakangas
Why, you might ask? The next commit will modify the code in gpdiff.pl, so that it doesn't get fooled by "----"-style comments, thinking that they are psql result sets. A side-effect of that is that it would also no longer recognize the result sets in the isolation2 output, without this patch.
-
Committed by Heikki Linnakangas
It was removed along with file replication, in commit 5c158ff3.
-
Committed by Heikki Linnakangas
These happen easily when merging code from upstream that had already been backported earlier.
-
Committed by Heikki Linnakangas
To make merging and diffing with upstream easier.
-
Committed by Heikki Linnakangas
It is created by initdb these days, as hinted by the comment.
-
Committed by Heikki Linnakangas
In int8.c: no one compiles GPDB with !USE_FLOAT8_BYVAL. I guess it should work in theory, but it hasn't been tested for ages. This makes int8.c 100% identical to upstream. In ruleutils.c: elog(ERROR) never returns, so this was dead code.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
-
Committed by BaiShaoqi
Use the GUC create_restarpoint_on_ckpt_record_replay to create a restartpoint immediately after replaying a checkpoint record (#6595) Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: Zhenghua Lyu <zlv@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Paul Guo
Fix the fts_recovery_in_progress test failure caused by pg_rewind running before the timeline id is updated after mirror promotion. This issue was also seen on PG upstream and was fixed by Heikki in the pg_rewind test case. In GPDB, pg_rewind is run automatically via gprecoverseg, so we'd better fix this in our own code (gprecoverseg or pg_rewind). In GPDB, after mirror promotion via FTS, it may take a while before a correct timeline id is flushed into the pg_control file. In that case, if we do incremental recovery via gprecoverseg, it will succeed, but the target node (new mirror) is still not functional.

commit 484a848a
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Thu Apr 30 21:57:18 2015 -0700

    Fix pg_rewind regression failure after "fast promotion"

    pg_rewind looks at the control file to determine the server's timeline. If the standby performs a "fast promotion", the timeline ID in the control file is not updated until the next checkpoint. The startup process requests a checkpoint immediately after promotion, so this is unlikely to be an issue in the real world, but the regression suite ran pg_rewind so quickly after promotion that the checkpoint had not yet completed.

    Reported by Stephen Frost
-
Committed by Ashwin Agrawal
Without this commit, after initdb the datfrozenxid for all databases remains 3. Ideally, databases should get frozen during initdb, as tools like pg_upgrade make assumptions based on it. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Mel Kiyama
* docs - first HA updates that use WAL replication.
  - Removed references to filerep
  - Updated segment instance states to use WAL rep states
  - Other misc. updates
* docs - HA updates for WAL replication - review comment updates
-
Committed by Ashwin Agrawal
-
- 07 Jan 2019, 7 commits
-
-
Committed by Heikki Linnakangas
Merge the two concepts, squelching and eager-freeing. There is now only one function, ExecSquelchNode(), that you can call. It recurses to children, as before, but it now also performs eager-freeing of resources. Previously that was done as a separate call, but that was an unnecessary distinction, because all callers of ExecSquelchNode() also called ExecEagerFree().

The concept of eager-freeing still lives on, as ExecEagerFree*() functions specific to many node types. But it no longer recurses! The pattern is that ExecSquelchNode() always performs eager freeing of the node, and also recurses. In addition, some node types also call their node-specific EagerFree function after reaching the end of tuples. This makes it clearer which function should be called when.

ExecEagerWalker() used to have special handling for the pattern: Result -> Material -> Broadcast Motion. I tried removing that, but then I started to get "illegal rescan of motion node" errors in the regression tests, from queries with a LIMIT node in a subquery. Upon closer look, I believe that was because the Limit node was calling ExecSquelchNode() on its input, even though the Limit node was marked as rescannable. To fix that, I added delayEagerFree logic to the Limit node, so that it does not call ExecSquelchNode() when the node might get rescanned later.

The planstate_walk_kids() code did not know how to recurse into the children of a MergeAppend node. We missed adding that logic when we merged the MergeAppend node type from upstream, in 9.1. We don't use that mechanism for recursing in ExecSquelchNode() anymore, but that should probably be fixed anyway, as a separate commit later.

Fixes https://github.com/greenplum-db/gpdb/issues/6602 and https://github.com/greenplum-db/gpdb/issues/6074. Reviewed-by: Tang Pengzhou <ptang@pivotal.io>
-
Committed by Heikki Linnakangas
The test was verifying that when ExecEagerFree() is called on a ShareInputScanState node, it calls ExecEagerFreeShareInputScan(). But that is trivially true: there is a direct call to ExecEagerFreeShareInputScan() in ExecEagerFree(). It seems pointless to have a test for it.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Mostly misspellings of "function".
-
Committed by Paul Guo
TemplateDbOid is for database template1, but template1 can be recreated, and thus its oid is no longer necessarily TemplateDbOid. Also change to use the database postgres to find the database oid and tablespace, like FTS and GDD do; we should use the same database for this purpose. I thought about which database GPDB should use: template1 or postgres. Both of them are template databases, but some users or customers customize their own template1 database, and template1 is the default template for creating databases, so if some auxiliary processes used it, the "create database" command would fail. Using the database postgres seems more reasonable. Of course, users could drop the postgres database, but that is a rare case and they could easily recreate one for our purpose. We really do not need to over-design for such a rare case.
-
Committed by Pengzhou Tang
-
Committed by Jinbao Chen
Serializable is not yet supported, so we need to fall back the GUCs 'transaction_isolation' and 'default_transaction_isolation' from serializable to repeatable read. Previously, we only applied the fallback correctly for transaction_isolation. For default_transaction_isolation, the correct fallback was applied only when using the 'SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL serializable' statement, not when using "SET default_transaction_isolation = 'serializable'". Add a check hook, check_DefaultXactIsoLevel, to fix this.
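The fallback logic can be sketched as follows (a Python stand-in for the C GUC check hook; the function name and mapping table are illustrative, not the actual GPDB implementation):

```python
# Unsupported isolation levels and the supported level each falls back to.
FALLBACK = {"serializable": "repeatable read"}

def check_default_xact_iso_level(value):
    # Sketch of what a check hook like check_DefaultXactIsoLevel does:
    # rewrite an unsupported level to a supported one before assignment,
    # so both SET SESSION CHARACTERISTICS ... and
    # SET default_transaction_isolation = ... go through the same path.
    return FALLBACK.get(value.lower(), value)

assert check_default_xact_iso_level("serializable") == "repeatable read"
assert check_default_xact_iso_level("read committed") == "read committed"
```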
-
- 05 Jan 2019, 2 commits
-
-
Committed by Heikki Linnakangas
We know the number of Motion nodes upfront, so we can allocate the 'mnEntries' array to the right size when we create the motion layer. No need to expand it incrementally. Reviewed-by: Tang Pengzhou <ptang@pivotal.io>
-
Committed by Chuck Litzell
* Adds window examples to the query topic in the admin guide
* Fix some clear errors in CREATE TYPE and CREATE FUNCTION SQL
* Changes from review comments
* Add missing period
-