- 13 Jan 2018, 19 commits
-
-
Committed by Heikki Linnakangas
I removed the autoconf flag and #ifdefs earlier, but missed these.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
WAL replication is the name of the game on this branch.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
And clean up some comments that talked about persistent tables.
-
Committed by Heikki Linnakangas
They were not kept up-to-date anymore anyway. Remove the actual tables. There are still a few references to these tables in the management tools. AFAICS they're in tests, and I was hesitant to remove them just yet, in case we're going to use the existing tests as a guide when writing new tests.
-
Committed by Heikki Linnakangas
An AOCO table doesn't have a '0' segfile at all. Therefore, using smgrexists() to check if a relation exists on disk does not work.
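The failure mode is easy to model. Below is a hedged sketch in Python, not GPDB's actual C code; the function name and the `<relfilenode>.<segno>` segment-file naming are assumptions for illustration:

```python
import os

def ao_relation_exists_on_disk(relpath):
    # Illustrative model: a column-oriented AO table may have no '.0'
    # segment file at all, so probing only the base file (roughly what
    # smgrexists() amounts to) can give a false negative. Also probe
    # for any numbered segment file of the relation.
    if os.path.exists(relpath):
        return True
    dirname, base = os.path.split(relpath)
    prefix = base + "."
    try:
        return any(name.startswith(prefix) for name in os.listdir(dirname or "."))
    except FileNotFoundError:
        return False
```

With a directory containing only `16384.129`, the base-file check alone would report the relation as missing, while the segment-aware check finds it.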
-
Committed by Heikki Linnakangas
If there is some unused space at the end of a WAL page, because we never split a WAL record header across pages, the WAL receiver's flush and apply positions were reported a bit funnily. The flush position would report the end of the page, including the unused padding, while the apply position would only go up to the end of the last WAL record on the page, excluding the padding. If you compare flush == apply positions, it would look as if not all of the WAL had been applied yet, even though the difference between the pointers was just the unused padding space. This will get fixed in PostgreSQL 9.3, where the padding at the end of a WAL page is eliminated, but until then, tweak the reporting of the apply position to also include any end-of-page padding. That makes the flush == apply comparison a valid way to check if all the flushed WAL has been applied, even at page boundaries. I believe this explains the "unable to obtain start synced LSN values between primary and mirror" failures we've been seeing from the gp_replica_check test. gp_replica_check waits for apply == flush, and if the last WAL record lands at a page boundary, that condition never became true because of the padding. (Although I'm not sure why it used to work earlier, or did it?)
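The padding tweak boils down to a small position adjustment. This is a sketch, not the actual patch; the page size and minimal record-header size are illustrative constants, the real values come from the server headers:

```python
XLOG_BLCKSZ = 8192           # WAL page size (assumed value)
SIZE_OF_XLOG_RECORD = 32     # minimal WAL record header size (assumed value)

def padded_apply_position(apply_pos):
    # If the space left on the current WAL page cannot hold even a
    # record header, no record can start there; count that padding as
    # applied so that apply == flush also holds at page boundaries.
    remaining = XLOG_BLCKSZ - (apply_pos % XLOG_BLCKSZ)
    if remaining < SIZE_OF_XLOG_RECORD:
        apply_pos += remaining
    return apply_pos
```

For example, an apply position 16 bytes short of a page boundary is rounded up to the boundary, while a position in the middle of a page is left untouched.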
-
Committed by Ashwin Agrawal
Resolving the GPDB_84_MERGE_FIXME now that we closely match upstream. Without this fix, the relation files were not dropped during recovery or replay on mirrors.
-
Committed by Heikki Linnakangas
* Set the relFileNode field correctly in MirroredAppendOnlyOpen, along with the File descriptor itself. Otherwise the relfilenode is set incorrectly in WAL records.
* Pretend that the filespace location is always "tblspc_dummy_<tablespace oid>". The filespace/tablespace stuff is quite broken ATM, but hopefully this at least avoids some crashing.
-
Committed by Heikki Linnakangas
The fault injection points used in the test didn't exist anymore. Add a new injection point in RecordTransactionCommit(), just before writing the commit WAL record, and use that in the test. Remove a bunch of fault injection IDs that are no longer used. (They are still referenced in some TINC tests, but the injection points don't exist anymore, so those tests will need to be rewritten if we want to keep them.)
-
Committed by Heikki Linnakangas
It was done in the later startup passes, which were removed. Add the call to where it is in the upstream. Fix a compiler warning about an unused variable in passing.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
* Revert almost all the changes in smgr.c / md.c, to not go through the Mirrored* APIs.
* Remove mmxlog stuff. Use upstream "pending relation deletion" code instead.
* Get rid of multiple startup passes. Now it's just a single pass like in the upstream.
* Revert the way database drop/create are handled to the way it is in upstream. Doesn't use PT anymore, but accesses the file system directly, and WAL-logs a single CREATE/DROP DATABASE WAL record.
* Get rid of MirroredLock.
* Remove a few tests that were specific to persistent tables.
* Plus a lot of little removals and reverts to upstream code.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Another tricky removal, but we got through.
-
Committed by Ashwin Agrawal
This was a little painful to untangle, but seems done now. Though if any shake-up happens, this should be the primary suspect.
-
Committed by Ashwin Agrawal
Includes removal of changetrackingdump contrib module.
-
Committed by Ashwin Agrawal
-
- 02 Jan 2018, 1 commit
-
-
Committed by Heikki Linnakangas
AFAICS, this code isn't used for anything. It's a debugging utility, though, so maybe that's intentional. I think to use this, you're supposed to modify the source code at some place of interest, and add a debug_break() call there. However, I'm not aware of anyone using that. I just insert a sleep() or use a gdb breakpoint for that, when I'm debugging.
-
- 28 Dec 2017, 1 commit
-
-
Committed by Xin Zhang
If the first insert into an AOCS table aborted, the first visible block in the block directory starts at a row number greater than 1. By default, we initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns. Hence, the first row numbers are not consistent between the visible blocks. This caused inconsistency between the base table scan and the scan using indexes through the block directory. This wrong-result issue only happened with the first invisible blocks. The current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already handles other gaps after the first visible blocks. The fix updates the `blockFirstRowNumber` with `expectedFRN`, and hence fixes the misalignment of the visible blocks.
Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
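The misalignment can be sketched numerically. This is a hypothetical model of the fix, not the actual `DatumStreamWriter` code; the row counts are made up for illustration:

```python
def first_block_row_number(expected_frn, fixed=True):
    # Model of the fix: the writer for a newly added column used to
    # start its first block at row number 1 unconditionally; with an
    # aborted first insert, the existing columns' first visible block
    # starts later, so block directory lookups go out of alignment.
    return expected_frn if fixed else 1

# Hypothetical scenario: the first insert of 100 rows aborted,
# so the first visible row number is 101.
expected_frn = 101
aligned = first_block_row_number(expected_frn)             # 101, matches
misaligned = first_block_row_number(expected_frn, fixed=False)  # 1, mismatch
```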
-
- 14 Dec 2017, 3 commits
-
-
Committed by Max Yang
Fix possible memory leak.
Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Taylor Vesely
In previous commit c9e2693c, the control file is supposed to be updated on the mirror when a restart point is created. However, unlike upstream, GPDB runs the mirror under DB_IN_STANDBY_MODE rather than upstream's DB_IN_ARCHIVE_RECOVERY mode. Hence, the control file was never updated when creating a restart point.
Author: Xin Zhang <xzhang@pivotal.io>
Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Ashwin Agrawal
This code is hidden behind a GUC and never turned on, so there is no point keeping it. It was coded in the past due to some inconsistency issues which have not surfaced for a long time now. Besides, PT needs to go away soon anyway.
-
- 13 Dec 2017, 1 commit
-
-
Committed by Andreas Scherbaum
* Make SPI work with 64-bit counters
* Fix GET DIAGNOSTICS
* Remove the earlier introduced SPI_processed64 variable
This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191
The commit https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.
-
- 12 Dec 2017, 4 commits
-
-
Committed by Daniel Gustafsson
These error codes were marked as deprecated in September 2007 but the code didn't get the memo. Extend the deprecation into the code and actually replace the usage. Ten years seems long enough notice so also remove the renames, the odds of anyone using these in code which compiles against a 6X tree should be low (and easily fixed).
-
Committed by Heikki Linnakangas
The check was introduced by the 8.4 merge, but I had disabled it because we were tripping it. I re-enabled it in commit 1df4698a, thinking that the other changes in that commit made it work again, but we're seeing pipeline failures on many filerep-related test suites because of this. So disable it again. We really shouldn't hit that sanity check, but the current plan is to revamp this code greatly before the next release, as we are about to start replacing the GPDB-specific file replication with upstream WAL replication. Chances are that it will start to just work once that work is done, so I'm not going to spend any more time investigating this right now. Analysis by Jacob Champion.
-
Committed by Heikki Linnakangas
We should figure out how to make gp_replica_check more robust, so that it doesn't need the aggressive restartpointing. Is there any way to signal the mirror to do a restartpoint before doing the comparisons? But for now, this at least silences the failure, so that we can move on.
-
Committed by Heikki Linnakangas
These fields and code were in different order than in upstream. Move to where these are in PostgreSQL 8.4 / 9.0, to reduce our diff. This also re-enables one assertion that was added in the upstream in 8.4, but was disabled in the merge. Adding the LocalSetXLogInsertAllowed() call to before rm_cleanup()s made that assertion work again. We had missed that in the merge.
-
- 09 Dec 2017, 1 commit
-
-
Committed by Jacob Champion
Upstream commit 43a57cf3, which significantly changes the API for the HashBitmap (TIDBitmap in Postgres), is about to hit in an upcoming merge. This patch is a joint effort by myself, Max Yang, Xiaoran Wang, Heikki Linnakangas, and Daniel Gustafsson to reduce our diff against upstream and support the incoming API changes with our GPDB-specific customizations. The primary goal of this patch is to support concurrent iterations over a single StreamBitmap or TIDBitmap. GPDB has made significant changes to allow either one of those bitmap types to be iterated over without the caller necessarily needing to know which is which, and we've kept that property here. Here is the general list of changes:
* Cherry-pick the following commit from upstream: commit 43a57cf3, Author: Tom Lane <tgl@sss.pgh.pa.us>, Date: Sat Jan 10 21:08:36 2009 +0000: "Revise the TIDBitmap API to support multiple concurrent iterations over a bitmap. This is extracted from Greg Stark's posix_fadvise patch; it seems worth committing separately, since it's potentially useful independently of posix_fadvise."
* Revert as much as possible of the TIDBitmap API to the upstream version, to avoid unnecessary merge hazards in the future.
* Add a tbm_generic_ version of the API to differentiate upstream's TIDBitmap-only API from ours. Both StreamBitmap and TIDBitmap can be passed to this version of the API.
* Update each section of code to use the new generic API.
* Fix up some memory management issues in bitmap.c that are now exacerbated by our changes.
-
- 08 Dec 2017, 2 commits
-
-
Committed by Heikki Linnakangas
Don't return NULL from bmgetbitmap() nor MultiExecBitmapAnd(). A NULL is not valid in the upstream, and it's better to avoid changing internal APIs like this, to reduce confusion and merge issues. Return an empty hash bitmap instead. Author: Xiaoran Wang <xiwang@pivotal.io> Author: Max Yang <myang@pivotal.io>
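The convention change is simple to illustrate. This is a sketch in Python, not the C implementation; Python sets stand in for hash bitmaps, and the function name mirrors `MultiExecBitmapAnd` only loosely:

```python
def mult_exec_bitmap_and(bitmaps):
    # Model of the convention: even when the intersection is empty,
    # hand back an empty bitmap object, never None, so that callers
    # need no special NULL path.
    if not bitmaps:
        return set()
    result = set(bitmaps[0])     # copy so inputs are left untouched
    for b in bitmaps[1:]:
        result &= b
    return result
```

The point is the return type, not the intersection itself: every caller can iterate the result unconditionally.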
-
Committed by Ashwin Agrawal
Seems this was missed while back-porting WAL replication, and due to this the latch was never signaled to the startup process. Instead, replay currently always happened due to the latch timeout of 5 seconds, and never due to the walreceiver signaling WAL arrival.
Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
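The latch mechanics can be modeled with a `threading.Event`. This is a hedged sketch of the pattern, assuming a 5-second default timeout as described above; the names are illustrative, not GPDB's:

```python
import threading

wal_arrival_latch = threading.Event()   # stands in for the startup process latch

def startup_wait_for_wal(timeout=5.0):
    # Before the fix, nothing ever set the latch, so this only ever
    # returned after the full timeout; once the walreceiver sets the
    # latch on WAL arrival, replay wakes up immediately instead.
    signaled = wal_arrival_latch.wait(timeout)
    wal_arrival_latch.clear()
    return signaled

# Walreceiver side, on WAL arrival:
wal_arrival_latch.set()
```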
-
- 07 Dec 2017, 1 commit
-
-
Committed by Pengzhou Tang
In GPDB, a web external scan is mainly divided into 3 steps:
1. url_execute_fopen() forks a child process and creates a pipe so that the child process can execute a command and send the output back through the pipe.
2. url_execute_fread() reads data from the pipe until the child has closed its end of the pipe.
3. url_execute_fclose() closes the pipe first, then waits for the child process to exit if failOnError is true.
However, for queries with a LIMIT clause, a QE may receive a query finish signal after url_execute_fopen(), and url_execute_fread() may be skipped, which means the parent may close the read end of the pipe before the child has written any data, and the child will exit with a SIGPIPE error. To fix this, we set failOnError to false if QueryFinishPending is true, so that any errors when closing the external file are ignored. QueryFinishPending means the QD has got enough tuples and the query can return correctly, so it should be fine to ignore the error in such a case. This fixes issue #4064.
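The close-side behavior can be reproduced in miniature. This is a Python model, not the C code; it assumes a POSIX system with a `yes` command standing in for the external command, and the function name only mirrors url_execute_fclose():

```python
import subprocess

def url_execute_fclose(proc, fail_on_error):
    # Closing our read end while the child is still writing makes the
    # child die with SIGPIPE on its next write; only treat a nonzero
    # exit as an error when the query really needed all of the output.
    proc.stdout.close()
    rc = proc.wait()
    if rc != 0 and fail_on_error:
        raise RuntimeError("external command failed: %s" % rc)
    return rc

# Child that writes forever, like a command behind an EXECUTE web table:
proc = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
query_finish_pending = True   # the QD already has enough tuples (LIMIT met)
rc = url_execute_fclose(proc, fail_on_error=not query_finish_pending)
```

With `query_finish_pending` set, the SIGPIPE death of the child is silently ignored; without it, the same exit code would raise an error.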
-
- 04 Dec 2017, 1 commit
-
-
Committed by Daniel Gustafsson
-
- 02 Dec 2017, 1 commit
-
-
Committed by Ivan Leskin
Add a new compression option for append-optimized tables, "zstd". It is generally faster than zlib or quicklz, and compresses better. Or at least it can be faster or compress better, if not both at the same time, by adjusting the compression level. A major advantage of Zstandard is the wide range of tuning, to choose the trade-off between compression speed and ratio. Update documentation to mention "zstd" alongside "zlib" and "quicklz". More could be done; all the examples still use zlib or quicklz, for example, and I think we want to emphasize Zstandard more in the docs, over those other options. But this is the bare minimum to keep the docs factually correct. Using the new option requires building the server with the libzstd library. A new --with-zstd option is added for that. The default is to build without libzstd, for now, but we should probably change the default to be on, after we have had a chance to update all the buildfarm machines to have libzstd. Patch by Ivan Leskin, Dmitriy Pavlov, Anton Chevychalov. Test case, docs changes, and some minor editorialization by Heikki Linnakangas.
-
- 01 Dec 2017, 1 commit
-
-
Committed by Amil Khanzada
- As part of determining the resource group that a transaction should be assigned to, AssignResGroupOnMaster() calls GetResGroupIdForRole(), which queries a syscache on the catalog table pg_authid, which maps users to resource groups.
- Prior to this commit, AssignResGroupOnMaster() was doing the queries on pg_authid near the top of StartTransaction(), before the per-transaction memory context was set up. This required GetResGroupIdForRole() to run ResourceOwnerCreate() to avoid segfaulting gpdb, and also led to many potential issues:
  * unknown behavior if a relcache invalidation event happens on pg_authid's syscache
  * possible stale pg_authid entries, as access was done with SnapshotNow and an out-of-date RecentGlobalXmin
  * memory leaks due to no memory context
  * an uphill battle, as newer versions of PostgreSQL remove SnapshotNow and assume catalog lookups only happen when transactions are open
Signed-off-by: David Sharp <dsharp@pivotal.io>
Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
-
- 30 Nov 2017, 3 commits
-
-
Committed by Jimmy Yih
MMXLOG records should only be replayed in standby mode and smgr create xlog records should not do anything until persistent tables and MMXLOG records are removed from Greenplum. Author: Jimmy Yih <jyih@pivotal.io> Author: Asim Praveen <apraveen@pivotal.io>
-
Committed by Tom Lane
This resulted in useless extra work during every call of parseRelOptions, but no bad effects other than that. Noted by Alvaro. (Cherry-picked from PostgreSQL commit eb9954e362)
-
Committed by Heikki Linnakangas
The options should only be registered once. Surprisingly, the duplicates seem to be harmless, because everything worked, but the accumulation of more and more reloptions was slowing down tests that created a lot of tables.
-
- 29 Nov 2017, 1 commit
-
-
Committed by Tom Lane
"all tuples visible" flag in heap page headers. The flag update *must* be applied before calling XLogInsert, but heap_update and the tuple moving routines in VACUUM FULL were ignoring this rule. A crash and replay could therefore leave the flag incorrectly set, causing rows to appear visible in seqscans when they should not be. This might explain recent reports of data corruption from Jeff Ross and others. In passing, do a bit of editorialization on comments in visibilitymap.c. (This is a cherry-pick of upstream PostgreSQL commit fedb166549. We bumped into this bug in the "test_crash_recovery_schema_topology.py" tests in the concourse pipeline.)
-