- 13 January 2018, 31 commits
-
-
Committed by Heikki Linnakangas
This hopefully fixes the gp_replica_check failures we're seeing in the pipeline.
-
Committed by Heikki Linnakangas
An empty segfile is mostly treated the same as a missing segfile, but for the sake of gp_replica_check, WAL-log the creation of an empty segfile anyway, so that there is no inconsistency between master and mirror where an empty segfile exists on the master but is missing entirely on the mirror. (I'm not entirely sure if there is non-testing code that requires that, too, so better safe than sorry.) This should fix warnings like this from gp_replica_check: WARNING: Unable to open file /tmp/build/e18b2f02/gpdb_src/gpAux/gpdemo/datadirs/dbfast_mirror2/demoDataDir1/base/16384/61117.1152 (There are other failures still.)
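The discrepancy gp_replica_check flags here boils down to a file-set comparison between master and mirror. A minimal illustrative sketch (hypothetical function, Python used purely for illustration, not the actual gp_replica_check code):

```python
# Illustrative only: an empty segfile created on the master but never
# WAL-logged would exist on the master and be missing on the mirror,
# showing up as a difference between the two file sets.
def missing_on_mirror(master_files, mirror_files):
    """Relation files present on the master but missing entirely on the mirror."""
    return sorted(set(master_files) - set(mirror_files))
```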
-
Committed by Heikki Linnakangas
If there is some unused space at the end of a WAL page, because we never split a WAL record header, the WAL receiver's flush and apply positions were reported a bit oddly. The flush position would report the end of the page, including the unused padding, while the apply position would only go up to the end of the last WAL record on the page, excluding the padding. If you compared the flush and apply positions, it would look as if not all of the WAL had been applied yet, even though the difference between the pointers was just the unused padding space.

This will get fixed in PostgreSQL 9.3, where the padding at the end of a WAL page is eliminated, but until then, tweak the reporting of the apply position to also include any end-of-page padding. That makes the flush == apply comparison a valid way to check whether all the flushed WAL has been applied, even at page boundaries.

I believe this explains the "unable to obtain start synced LSN values between primary and mirror" failures we've been seeing from the gp_replica_check test. gp_replica_check waits for apply == flush, and if the last WAL record lands at a page boundary, that condition never became true because of the padding. (Although I'm not sure why it used to work earlier, or did it?)
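The tweak described above can be sketched roughly as follows (hypothetical names, Python used purely for illustration, not the actual WAL receiver code; it also glosses over distinguishing real unapplied records from padding, which the receiver can do because it knows where the next record starts):

```python
XLOG_BLCKSZ = 8192  # WAL page size, as in PostgreSQL

def next_page_boundary(lsn):
    """Start of the next WAL page at or after lsn."""
    return ((lsn + XLOG_BLCKSZ - 1) // XLOG_BLCKSZ) * XLOG_BLCKSZ

def reported_apply_position(apply_lsn, flush_lsn):
    """If the only flushed-but-unapplied bytes are end-of-page padding
    (the flush position sits exactly on the page boundary after the
    last applied record), report the apply position as the boundary,
    so that apply == flush holds once all real WAL has been applied."""
    if apply_lsn < flush_lsn and flush_lsn == next_page_boundary(apply_lsn):
        return flush_lsn
    return apply_lsn
```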
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Because persistent tables are no more. NOTE: It would still be nice to check for consistency between pg_class and files on disk, to check that there are no extra data files, and no data files missing that have a pg_class entry. Same with AO seg files, I suppose. But that's a significantly different query than what we have here.
-
Committed by Ashwin Agrawal
Resolving the GPDB_84_MERGE_FIXME now that we closely match upstream. Without this fix, the relation files were not dropped during recovery or replay on mirrors.
-
Committed by Heikki Linnakangas
Because it's no longer created by the MMXLOG records. Alternatively, we could have a separate WAL record type for the creation. But this will do for now.
-
Committed by Heikki Linnakangas
* Need to set the relFileNode field correctly in MirroredAppendOnlyOpen, along with the File descriptor itself. Otherwise the relfilenode is set incorrectly in WAL records.
* Pretend that the filespace location is always "tblspc_dummy_<tablespace oid>". The filespace/tablespace stuff is quite broken ATM, but hopefully this at least avoids some crashing.
-
Committed by Heikki Linnakangas
The fault injection points used in the test didn't exist anymore. Add a new injection point in RecordTransactionCommit(), just before writing the commit WAL record, and use that in the test. Remove a bunch of fault injection IDs that are no longer used. (They are still referenced in some TINC tests, but the injection points don't exist anymore, so those tests will need to be rewritten if we want to keep them.)
-
Committed by Heikki Linnakangas
It was done in the later startup passes, which were removed. Add the call where it is in the upstream. Also fix a compiler warning about an unused variable in passing.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
* Revert almost all the changes in smgr.c / md.c, to not go through the Mirrored* APIs.
* Remove mmxlog stuff. Use upstream "pending relation deletion" code instead.
* Get rid of multiple startup passes. Now it's just a single pass like in the upstream.
* Revert the way database drop/create are handled to the way it is in upstream. Doesn't use PT anymore, but accesses the file system directly, and WAL-logs a single CREATE/DROP DATABASE WAL record.
* Get rid of MirroredLock.
* Remove a few tests that were specific to persistent tables.
* Plus a lot of little removals and reverts to upstream code.
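The upstream "pending relation deletion" idea mentioned above can be sketched in miniature (hypothetical names, Python used purely for illustration, not the actual smgr.c/md.c code): relation file drops are queued during the transaction and only acted on at end of transaction, which is also what makes them replayable on mirrors.

```python
# Simplified sketch: drops are registered, not performed immediately.
def register_drop(pending, relpath, at_commit):
    """Queue a relation file for deletion at commit (True) or abort (False)."""
    pending.append((relpath, at_commit))

def files_to_unlink(pending, committed):
    """At end of transaction, the files whose fate matches the outcome
    are the ones to physically unlink."""
    return [relpath for relpath, at_commit in pending if at_commit == committed]
```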
-
Committed by Ashwin Agrawal
With the file replication code removed, WAL replication is now the only HA system for Greenplum. Ideally we would have removed the config option, as it's no longer a choice, but we're keeping it for now since all the code checking for it needs to be modified at some point anyway.
-
Committed by Ashwin Agrawal
This just removes the already-decoupled filerep code; a lot more FTS code needs to be cleaned up as part of the filerep removal.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Another tricky removal, but we got through it.
-
Committed by Ashwin Agrawal
Users of the fault injector should set the fault on the desired role, like master, primary, or mirror. There doesn't seem to be a need for the underlying fault injector code to be so heavy in checking for the right role.
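The intended division of responsibility can be sketched as follows (hypothetical names, Python used purely for illustration, not the actual GPDB fault-injector API): the user targets a role when setting the fault, so the check at the injection point stays trivial.

```python
# Illustrative sketch: the fault is set against a specific role up front,
# so triggering only needs a cheap equality check, not heavy role logic.
faults = {}

def set_fault(name, role):
    """User sets the fault on the desired role (master/primary/mirror)."""
    faults[name] = role

def should_trigger(name, my_role):
    """Lightweight check performed at the injection point."""
    return faults.get(name) == my_role
```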
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
This was a somewhat painful one to untangle, but it seems done now. Though if any shake-up happens, this should be the primary suspect.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Includes removal of changetrackingdump contrib module.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
- 12 January 2018, 9 commits
-
-
Committed by Heikki Linnakangas
To avoid being confused by a user-created function called "sum". Fixes github issue #4185.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It wasn't very useful. The semaphore code is inherited from upstream, and not likely to break any time soon. If you need this information during debugging, use a debugger.
-
Committed by Lav Jain
* Remove PXF regression tests from master pipeline
* Change regression_pxf test file name
* Incorporate feedback
* Update pxf_tarball location
-
Committed by Chris Hajas
Author: Chris Hajas <chajas@pivotal.io> Author: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by Shreedhar Hardikar
This commit brings in ORCA changes that ensure that a Materialize node is not added under a Filter when its child contains outer references. Otherwise, the subplan is not rescanned (because it is under a Material), producing wrong results. A rescan is necessary because it evaluates the subplan for each of the outer referenced values. For example:

```
SELECT * FROM A, B
WHERE EXISTS (
    SELECT * FROM E
    WHERE E.j = A.j
      AND B.i NOT IN (SELECT E.i FROM E WHERE E.i != 10));
```

For the above query ORCA produces a plan with two nested subplans:

```
Result
  Filter: (SubPlan 2)
  ->  Gather Motion 3:1
        ->  Nested Loop
              Join Filter: true
              ->  Broadcast Motion 3:3
                    ->  Table Scan on a
              ->  Table Scan on b
  SubPlan 2
    ->  Result
          Filter: public.c.j = $0
          ->  Materialize
                ->  Result
                      Filter: (SubPlan 1)
                      ->  Materialize
                            ->  Gather Motion 3:1
                                  ->  Table Scan on c
          SubPlan 1
            ->  Materialize
                  ->  Gather Motion 3:1
                        ->  Table Scan on c
                              Filter: i <> 10
```

The Materialize node (on top of the Filter with SubPlan 1) has cdb_strict = true. The cdb_strict semantics dictate that when the Materialize is rescanned, instead of destroying its tuplestore, it resets the accessor pointer to the beginning and the subtree is NOT rescanned. So the entries from the first scan are returned for all future calls; i.e. the results depend on the first row output by the cross join. This causes wrong and non-deterministic results.

Also, this commit reinstates this test in qp_correlated_query.sql. It also fixes another wrong result caused by the same issue.

Note that the changes in rangefuncs_optimizer.out are because ORCA now no longer falls back for those queries. Instead it produces a plan which is executed on the master (instead of the segments, as was done by the planner), which changes the error messages.

Also bump ORCA version to 2.53.8.

Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Shoaib Lari
gpstart did a cluster-wide check of the heap_checksum settings and refused to start the cluster if this setting was inconsistent. This meant a round of ssh'ing across the cluster, which was causing OOM errors with large clusters. This commit moves the heap_checksum validation to gpsegstart.py, and changes the logic so that only those segments which have the same heap_checksum setting as the master are started. Author: Nadeem Ghani <nghani@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
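The revised startup logic can be sketched roughly like this (hypothetical function and data shape, illustrative only, not the actual gpsegstart.py code): instead of refusing to start the whole cluster on any mismatch, each segment is started or skipped based on whether its heap_checksum setting matches the master's.

```python
# Illustrative sketch: partition segments by heap_checksum agreement
# with the master, and only start the ones that match.
def segments_to_start(master_heap_checksum, segments):
    """Return (startable, skipped) given each segment's heap_checksum setting."""
    startable, skipped = [], []
    for seg in segments:
        if seg["heap_checksum"] == master_heap_checksum:
            startable.append(seg)
        else:
            skipped.append(seg)
    return startable, skipped
```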
-
Committed by dyozie
-