- 03 January 2020, 2 commits
-
-
Committed by Ashwin Agrawal
The alter_db_set_tablespace test has scenarios that inject an error fault on content 0 and then run the ALTER DATABASE SET TABLESPACE command. Once the error is hit on content 0, the transaction is aborted. Depending on when the transaction gets aborted, it is unpredictable how far the command has progressed on the non-content-0 primaries: only if a non-content-0 primary has reached the point of copying the directory will its abort record carry a database-directory deletion record to be replayed on its mirror. The test waited for the directory-deletion fault to be triggered on all content mirrors; that expectation is incorrect and makes the test flaky depending on timing. Hence, modify the error scenarios in the test to wait for directory deletion only on content 0, then wait for all mirrors to replay all WAL records generated so far, and after that make sure the destination directory is empty. This should eliminate the flakiness from the test.

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Ashwin Agrawal
gpdeletesystem uses GpDirsExist() to check whether dump directories are present, so that it can warn and avoid deleting the cluster; only with the "-f" option is deleting a cluster that still has dump directories allowed. However, this function incorrectly matched both files and directories named "*dump*", not just directories. As a result, gpdeletesystem started failing after commit eb036ac1: FTS writes a file named `gpsegconfig_dump`, which GpDirsExist() incorrectly reported as a backup directory being present. Fix this by checking only for directories, not files.

Fixes https://github.com/greenplum-db/gpdb/issues/8442

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
- 02 January 2020, 1 commit
-
-
Committed by (Jerome)Junfeng Yang
The error code should not be set twice, with different codes, in `errfinish_and_return`, because the evaluation order of a function's arguments is compiler-dependent. If the final error code ends up as ERRCODE_INTERNAL_ERROR, the file name and line number are printed:

```
ERROR: Error on receive from SEG IP:PORT pid=PID: *** (cdbdispatchresult.c:487)
```

It is strange to print the file name and line number here. So, remove ERRCODE_INTERNAL_ERROR and keep only ERRCODE_GP_INTERCONNECTION_ERROR, which was also the error code before commit 143bb7c6.

Reviewed-by: Paul Guo <paulguo@gmail.com>
(cherry picked from commit 55d6415b)
-
- 01 January 2020, 1 commit
-
-
Committed by Ashwin Agrawal
The test should make sure the mirror has processed the drop-database WAL record before proceeding to check that the destination tablespace directory does not exist. It skipped the wait for content 0 in case of a panic after writing the WAL record, which is incorrect. Change it to wait for all the mirrors to process the WAL record, and only then perform the validation. This should fix the failures seen in CI with the diff below:

```
--- /tmp/build/e18b2f02/gpdb_src/src/test/regress/expected/alter_db_set_tablespace.out 2019-10-14 16:09:43.638372174 +0000
+++ /tmp/build/e18b2f02/gpdb_src/src/test/regress/results/alter_db_set_tablespace.out 2019-10-14 16:09:43.714379108 +0000
@@ -1262,25 +1271,352 @@
 CONTEXT: PL/Python function "stat_db_objects"
 NOTICE: dboid dir for database alter_db does not exist on dbid = 4
 CONTEXT: PL/Python function "stat_db_objects"
-NOTICE: dboid dir for database alter_db does not exist on dbid = 5
-CONTEXT: PL/Python function "stat_db_objects"
 NOTICE: dboid dir for database alter_db does not exist on dbid = 6
 CONTEXT: PL/Python function "stat_db_objects"
 NOTICE: dboid dir for database alter_db does not exist on dbid = 7
 CONTEXT: PL/Python function "stat_db_objects"
 NOTICE: dboid dir for database alter_db does not exist on dbid = 8
 CONTEXT: PL/Python function "stat_db_objects"
- dbid | relfilenode_dboid_relative_path | size
-------+---------------------------------+------
- 1    |                                 |
- 2    |                                 |
- 3    |                                 |
- 4    |                                 |
- 5    |                                 |
- 6    |                                 |
- 7    |                                 |
- 8    |                                 |
-(8 rows)
+ dbid | relfilenode_dboid_relative_path |  size
+------+---------------------------------+--------
+ 1    |                                 |
+ 2    |                                 |
+ 3    |                                 |
+ 4    |                                 |
+ 5    | 180273/112                      |  32768
+ 5    | 180273/113                      |  32768
+ 5    | 180273/12390                    |  65536
+ 5    | 180273/12390_fsm                |  98304
<...output chopped here, as it is very long...>
+ 5    | 180273/PG_VERSION               |      4
+ 5    | 180273/pg_filenode.map          |   1024
+ 6    |                                 |
+ 7    |                                 |
+ 8    |                                 |
+(337 rows)
```

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
- 31 December 2019, 1 commit
-
-
Committed by Paul Guo
This helps script handling by checking return values.

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
- 30 December 2019, 1 commit
-
-
Committed by xiong-gang
When 'CopyReadLineText' finds a broken end-of-copy marker, it errors out without updating the current index into the buffer. When a 'reject limit' is set, COPY will then process the same line again.
-
- 28 December 2019, 1 commit
-
-
Committed by Ashwin Agrawal
Based on reports from the field for GPDB, the 1-minute default of the wal_sender_timeout GUC causes the primary to terminate the replication connection too often under heavy workloads. This causes the mirror to be marked down and WAL to pile up on the primary. It is mostly seen in configurations where fsync takes a long time on the mirrors. Hence, a higher default value for this GUC helps avoid unnecessarily marking mirrors down. The only downside of this change: when the connection between primary and mirror exists but the mirror does not respond for some reason, this will be detected a little later than with the previous 1-minute timeout. But the 1-minute timeout has a major downside, and mirrors must be manually recovered after being marked down; hence, it's desirable not to falsely break the connection due to a timeout. Increasing the timeout to 5 minutes is just an educated guess, as it's hard to come up with a reasonable default, but bumping the value is desired based on the inputs received.

Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
- 27 December 2019, 2 commits
-
-
Committed by Zhenghua Lyu
The previous commit c9655e2d forgot to check tuple locality for updates. This commit adds the tuple-locality check to ExecUpdate and refactors the test cases.
-
Committed by Chuck Litzell
-
- 26 December 2019, 3 commits
-
-
Committed by Ning Yu
ZSTD creates CCtx and DCtx with malloc() by default, and a NULL pointer is returned on OOM, so the callers must check for NULL pointers. Also fixed a typo in a comment.

Fixes: https://github.com/greenplum-db/gpdb/issues/9294
Reported-by: shellboy
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
(cherry picked from commit d74aa39f)
-
Committed by Paul Guo
We recently started to control WAL write bursts by calling SyncRepWaitForLSN() more frequently, and this change made the segspace test flaky. The test injects a fault (exec_hashjoin_new_batch) with an interrupt event, which makes it easier for the cancel event to be seen in SyncRepWaitForLSN() and thus sometimes causes additional output. Fix this by disabling the current cancel-handling code when the call is not a commit call of SyncRepWaitForLSN(). Here is the diff of the test failure:

```
 begin;
 insert into segspace_t1_created SELECT t1.* FROM segspace_test_hj_skew AS t1, segspace_test_hj_skew AS t2 WHERE t1.i1=t2.i2;
+DETAIL: The transaction has already changed locally, it has to be replicated to standby.
 ERROR: canceling MPP operation
+WARNING: ignoring query cancel request for synchronous replication to ensure cluster consistency
 rollback;
```

Cherry-picked from 84642c4b. Besides, add the commit parameter to SyncRepWaitForLSN(), following the master code. Checked the related upstream patch; adding the new parameter in the current gpdb version should be fine.
-
Committed by Ning Yu
The alter_db_set_tablespace test has been flaky for a long time; one typical failure looks like this:

```
--- /regress/expected/alter_db_set_tablespace.out
+++ /regress/results/alter_db_set_tablespace.out
@@ -1204,21 +1213,348 @@
 NOTICE: dboid dir for database alter_db does not exist on dbid = 2
 NOTICE: dboid dir for database alter_db does not exist on dbid = 3
 NOTICE: dboid dir for database alter_db does not exist on dbid = 4
-NOTICE: dboid dir for database alter_db does not exist on dbid = 5
 NOTICE: dboid dir for database alter_db does not exist on dbid = 6
 NOTICE: dboid dir for database alter_db does not exist on dbid = 7
 NOTICE: dboid dir for database alter_db does not exist on dbid = 8
```

The test disables fts probing with fault injection, but it does not wait for the fault to be triggered. The other problem is that fts probing was disabled after the PANIC, which might not be in time. So we had a scenario where the fault was injected after the fts loop was already past the fault point, and fts was still active when the subsequent PANIC was caused. By manually triggering the probe, and then waiting to ensure that the fault is hit at least once, we can guarantee that the scenario described above doesn't happen.

Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
(cherry picked from commit 54e3af6d)
-
- 24 December 2019, 3 commits
-
-
Committed by (Jerome)Junfeng Yang
For the external table below:

```
CREATE EXTERNAL WEB TABLE web_ext ( junk text)
    execute 'echo hi' on master
    FORMAT 'text' (delimiter 'OFF' null E'\\N' escape E'\\');
```

querying the table raises an unexpected error:

```
SELECT * FROM web_ext;
ERROR:  using no delimiter is only supported for external tables
```

An external scan calls BeginCopyFrom to initialize CopyStateData, so when `ProcessCopyOptions` runs in `BeginCopy`, the relation may be an external relation. The fix checks whether the relation is an external relation and, if so, sets the correct parameters for `ProcessCopyOptions`.
-
Committed by Ashwin Agrawal
Similar to commit 8c40565a, apply the same change to the commit_blocking_on_standby test as well. Checking sync_state is unnecessary and makes the test flaky; the only alternative would be to add retries, so not checking it at all is better.
-
Committed by Ashwin Agrawal
This test sometimes fails with the diff below:

```
 -- Sync state between master and standby must be restored at the end.
 select application_name, state, sync_state from pg_stat_replication;
  application_name |   state   | sync_state
 ------------------+-----------+------------
- gp_walreceiver   | streaming | sync
+ gp_walreceiver   | streaming | async
 (1 row)
```

The reason is that this query may execute in the window after the standby is created and changes state to streaming, but before the flush location is set to a valid value based on the standby's reply. pg_stat_get_wal_senders() reports sync_state as "async" if the flush location is an invalid pointer; hence we sometimes get the above diff, depending on timing. To fix this, remove the sync_state field from the query in this test. In GPDB we always create the standby as sync-only, and "state" already tells us what we wish to check here: whether the standby is up and running. Checking sync_state is unnecessary, so avoid it and make the test stable. If we had to keep sync_state, we would have to add unnecessary retry logic for this query.
-
- 23 December 2019, 3 commits
-
-
Committed by Zhenghua Lyu
transformRelOptions may return a null pointer in some cases; add the check in the function `add_partition_rule`.
-
Committed by Heikki Linnakangas
The Motion sender code has four different codepaths for serializing a tuple from the input slot:

1. Fetch MemTuple from slot, copy it out as it is.
2. Fetch MemTuple from slot, re-format it into a new MemTuple by fetching and inlining any toasted datums. Copy out the re-formatted MemTuple.
3. Fetch HeapTuple from slot, copy it out as it is.
4. Fetch HeapTuple from slot, copy out each attribute separately, fetching and inlining any toasted datums.

In addition to the above, there are "direct" versions of codepaths 1 and 3, used when the tuple fits in the caller-provided output buffer.

As discussed in https://github.com/greenplum-db/gpdb/issues/9253, the fourth codepath is very inefficient if the input tuple contains datums that are compressed inline, but not toasted. We decompress such tuples before serializing, and in the worst case might need to recompress them again in the receiver if the tuple is written out to a table. I tried to fix that in commit 4c7f6cf7, but it was broken and was reverted in commit 774613a8. This is a new attempt at fixing the issue.

This commit removes codepath 4 altogether, so that if the input tuple is a HeapTuple with any toasted attributes, it is first converted to a MemTuple and codepath 2 is used to serialize it. That way, we have less code to test, and materializing a MemTuple is roughly as fast as the old code that wrote out the attributes of a HeapTuple one by one, except that the MemTuple codepath avoids the decompression of already-compressed datums. While we're at it, add some tests for the various codepaths through SerializeTuple().

To test the performance of the affected case, where the input tuple is a HeapTuple with toasted datums, I used this:

```
CREATE temporary TABLE foo (a text, b text, c text, d text, e text, f text,
    g text, h text, i text, j text, k text, l text, m text, n text, o text,
    p text, q text, r text, s text, t text, u text, v text, w text, x text,
    y text, z text, large text);
ALTER TABLE foo ALTER COLUMN large SET STORAGE external;
INSERT INTO foo SELECT 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k',
    'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
    repeat('1234567890', 1000) FROM generate_series(1, 10000);
-- verify that the data is uncompressed, should be about 110 MB.
SELECT pg_total_relation_size('foo');
\o /dev/null
\timing on
SELECT * FROM foo; -- repeat a few times
```

The last select took about 380 ms on my laptop, with or without this patch. So the new codepath, where the input HeapTuple is converted to a MemTuple first, is about as fast as the old method. There might be small differences in the serialized size of the tuple, too, but I didn't explicitly measure that. If you have a toasted but not compressed datum, the input must be quite large, so small differences in the datum header sizes shouldn't matter much. If the input HeapTuple contains any compressed datums, this avoids the recompression, so even if converting to a MemTuple were somewhat slower in that case, it should still be much better than before.

I kept the HeapTuple codepath for the case where there are no toasted datums. I'm not sure it's significantly faster than converting to a MemTuple either; the caller has to slot_deform_tuple() the received tuple before it can do much with it, and that is slower with HeapTuples than MemTuples. But that codepath is straightforward enough that getting rid of it wouldn't save much code, and I don't feel like doing the performance testing to justify it right now.

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Heikki Linnakangas
It cannot return NULL: it will either return a valid pointer, or the palloc() will ERROR out.

Reviewed-by: Asim R P <apraveen@pivotal.io>
-
- 21 December 2019, 3 commits
-
-
Committed by ggbq
With mode COPY_DISPATCH, the master incorrectly allocated the TupleTableSlot for a new ResultRelInfo of a partition in the per-tuple memory context. This can happen in the presence of ResultRelInfo::ri_partInsertMap. It causes a crash because the per-tuple context is reset on each tuple iteration; the per-query context should be used instead. Reproduce the crash using the following SQL commands:

```
DROP TABLE IF EXISTS partition_test;
CREATE TABLE partition_test (
    id INT,
    tm TIMESTAMP
) DISTRIBUTED BY (id)
PARTITION BY RANGE(tm)
(
    PARTITION p2019 START ('2019-01-01'::TIMESTAMP) END ('2020-01-01'::TIMESTAMP),
    DEFAULT PARTITION extra
);
ALTER TABLE partition_test ADD COLUMN dd TIMESTAMP;
ALTER TABLE partition_test DROP COLUMN dd;
ALTER TABLE partition_test ADD COLUMN dd TEXT;
ALTER TABLE partition_test SPLIT DEFAULT PARTITION
    START ('2020-01-01'::TIMESTAMP) END ('2021-01-01'::TIMESTAMP)
    INTO (PARTITION p2020, DEFAULT PARTITION);
COPY (SELECT generate_series, '2020-12-20'::TIMESTAMP, 'ABCDEF'
      FROM generate_series(1, 10000)) TO '/tmp/partition_test.txt';
COPY partition_test FROM '/tmp/partition_test.txt';
```

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Heikki Linnakangas
Commit 589c737e bumped the expected ORCA version number to 3.86, but forgot to update the error message.
-
- 20 December 2019, 5 commits
-
-
Committed by Hao Wu
In this PR (https://github.com/greenplum-db/gpdb/pull/9248), we set the default value of wal_keep_segments to 0, the same as upstream, because we have a replication slot to avoid removal of WAL files required by the mirror. That seems to be fine, but there is no replication slot for the master/standby pair yet, so it's unsafe to remove WAL files that may still be required by the standby. For now, before a replication slot is added for the master, set the default value of wal_keep_segments back to 5. (cherry picked from commit 3ce78553)
-
Committed by Ashwin Agrawal
To cover both ALTER TABLE and CTAS, heap_insert() is the common place, so we felt it better to have the call in heap_insert() instead of spreading calls specifically across those two code paths. VACUUM FULL uses the cluster code, so we placed a separate call for VACUUM FULL, which also covers CLUSTER. Lazy vacuum needed a separate call as well.

Co-authored-by: Adam Lee <ali@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
(cherry picked from commit 22b4073d)
-
Committed by Ashwin Agrawal
Reviewed-by: Asim R P <apraveen@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
(cherry picked from commit eae1c6ef)
-
Committed by Ashwin Agrawal
Reviewed-by: Asim R P <apraveen@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
(cherry picked from commit cf254d1d)
-
Committed by Ashwin Agrawal
On commit, transactions in GPDB wait for replication and make sure WAL is flushed up to the commit LSN on the mirror. While commit is the mandatory sync/wait point, waiting for replication at periodic intervals even before that may be desirable and more efficient, to act as a good citizen in the system. Consider, for example, a setup where the primary and mirror can each write at 20GB/sec, while the network between them can only transfer 2GB/sec. If a CTAS for a large table is run in such a setup, it can generate WAL very aggressively on the primary, but the WAL cannot be transferred at that rate to the mirror; hence, pending WAL builds up on the primary. This exhibits two main problems:

- new write transactions (even single-tuple I/U/D) see latency proportional to the time needed to ship and flush the pending WAL to the mirror
- the primary needs space to hold that much WAL, since WAL cannot be recycled until it has been shipped to the mirror

So, to improve the situation, instead of waiting for the mirror only at the commit point, we need a way to keep the primary from racing ahead with WAL generation, and instead to move large transactions at a more sustainable speed that matches the network and the mirrors. This helps avoid bulk transactions starving concurrent transactions from committing due to sync rep.

Add a global (backend-local) variable that tracks the amount of WAL written by the transaction, and an interface `wait_to_avoid_large_repl_lag()` that can be called at strategic points to wait for replication. If the transaction has written a threshold amount of WAL (defined by a new GUC), this interface calls SyncRepWaitForLSN() with an LSN equal to the cached value of the WAL flush point. Using this interface, transactions that generate large amounts of WAL can wait for replication, based on the amount of WAL they have written, well before reaching the commit point.

Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/3qMsyIj3ikA/bcioZv8wAQAJ
Reviewed-by: Asim R P <apraveen@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
(cherry picked from commit 0aec3c8f)
-
- 19 December 2019, 5 commits
-
-
Committed by Paul Guo
Replace ExecFetchSlotHeapTuple()+heap_copytuple() with ExecCopySlotHeapTuple() in CopyFrom() (#9261)

This saves one memory copy of the full tuple length for each tuple processed. I did not see a big improvement in 'copy from' perf testing with this change (1%+ avg. running-time reduction over 20 runs with table length 1k+), but it is still helpful.

Reviewed-by: Ashwin Agrawal
Cherry-picked from 9a0e6ab6
-
Committed by Ashwin Agrawal
This is a cherry-pick of upstream commit a1a789eb, slightly modified to work with the current GPDB code, which is based on 9.4. It helps fix the deadlocks seen in CI for walreceiver.

-------------------
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Mon Apr 29 12:26:07 2019 -0400

In walreceiver, don't try to do ereport() in a signal handler.

This is quite unsafe, even for the case of ereport(FATAL) where we won't return control to the interrupted code, and despite this code's use of a flag to restrict the areas where we'd try to do it. It's possible for example that we interrupt malloc or free while that's holding a lock that's meant to protect against cross-thread interference. Then, any attempt to do malloc or free within ereport() will result in a deadlock, preventing the walreceiver process from exiting in response to SIGTERM. We hypothesize that this explains some hard-to-reproduce failures seen in the buildfarm.

Hence, get rid of the immediate-exit code in WalRcvShutdownHandler, as well as the logic associated with WalRcvImmediateInterruptOK. Instead, we need to take care that potentially-blocking operations in the walreceiver's data transmission logic (libpqwalreceiver.c) will respond reasonably promptly to the process's latch becoming set and then call ProcessWalRcvInterrupts. Much of the needed code for that was already present in libpqwalreceiver.c. I refactored things a bit so that all the uses of PQgetResult use latch-aware waiting, but didn't need to do much more.

These changes should be enough to ensure that libpqwalreceiver.c will respond promptly to SIGTERM whenever it's waiting to receive data. In principle, it could block for a long time while waiting to send data too, and this patch does nothing to guard against that. I think that that hazard is mostly theoretical though: such blocking should occur only if we fill the kernel's data transmission buffers, and we don't generally send enough data to make that happen without waiting for input. If we find out that the hazard isn't just theoretical, we could fix it by using PQsetnonblocking, but that would require more ticklish changes than I care to make now.

This is a bug fix, but it seems like too big a change to push into the back branches without much more testing than there's time for right now. Perhaps we'll back-patch once we have more confidence in the change.

Patch by me; thanks to Thomas Munro for review.

Discussion: https://postgr.es/m/20190416070119.GK2673@paquier.xyz
-------------------
-
Committed by Robert Mu
-
Committed by David Yozie
-
Committed by Adam Berlin
-
- 18 December 2019, 3 commits
-
-
Committed by Asim R P
This reverts commit c34e76e4.

Thank you Ekta for finding this simple repro that demonstrates the problem with the patch, and Jesse for the initial analysis:

```
CREATE TABLE foo(a text, b text);
INSERT INTO foo SELECT repeat('123456789', 100000)::text as a,
       repeat('123456789', 10)::text as b;
SELECT * FROM foo;
```

The motion receiver has no idea whether a datum it received is compressed or not, because the varlena header is stripped off before sending the data. Heikki and I discussed two options to fix this:

1. Include the varlena header when sending. This incurs at most 8 bytes of overhead per variable-length datum in a heap tuple.
2. Always send tuples as MemTuples. This is more desirable because it simplifies the code, but it also comes with a performance cost.

Let's evaluate the two options based on performance and then commit the better one.
-
Committed by Hao Wu
* Revert "Increase default value of wal_keep_segments GUC."

  This reverts commit dd18c4a0. We have replication slots now; wal_keep_segments is not needed for safely removing WAL files.

* Rewrite test missing_xlog

  The old missing_xlog test tried to trigger an error on the primary when the WAL files the mirror requests are unavailable. To simulate this scenario, the test moved WAL files to another folder, triggered the error, and moved them back. But this can damage the WAL files on the primary, because it assumes there are no WAL writes between the two moves of the WAL files. Unluckily, the assumption is false: hint bits can trigger a WAL write request (FPI_FOR_HINT), and the hot standby may also request a WAL write. So moving WAL files is not recommended. To make `sync_error` occur, we set `wal_keep_segments` to 0, i.e. disable the feature. And in order to temporarily break the constraint of the replication slot, so that WAL files used by the mirror are deleted, we use fault injection. In the end, the mirror is missing some WAL files needed for incremental recovery, so we do a full recovery of the mirror.

* Add 'wal_keep_segments = 5' to the configuration of test pg_rewind

  Since wal_keep_segments is set to 0 by default, WAL files are only guarded by replication slots. In the pg_rewind test case, there is currently no replication slot; a workaround is to set wal_keep_segments back to 5 in the test case.

Co-authored-by: Asim R P <apraveen@pivotal.io>
(cherry picked from commit 06686422)
-
Committed by Lisa Owen
* docs - updates to configuring windows client for kerberos auth docs
* need to set PGGSSLIB before running greenplum_clients_path.bat
* add the AD info back into the file, but comment out
* misc edit
* identify config step for custom kerb config file location
* include automated task example in generating keytab section
* note that an additional step requires admin privileges
* reinstate, but comment out, an AD statement
* remove dangling reference to AD user name
-
- 17 December 2019, 6 commits
-
-
Committed by Heikki Linnakangas
If a datum is toasted and compressed, only detoast it before serializing; there is no need to decompress it, since the receiver can decompress it if needed. That's a big win if the receiver is just going to store the value back to disk and doesn't need to decompress it at all. The corresponding codepath for MemTuples was already doing this, within memtuple_form_to().

Addresses github issue https://github.com/greenplum-db/gpdb/issues/9253
Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Роман Зотов
MAXALIGN(datum_size) can be greater than BUFFER_INCREMENT_SIZE; hence, when allocating the buffer, we need to make sure it has enough space for writing the next datum.

(cherry picked from commit bb99c441)
-
Committed by Mel Kiyama
* docs - clarify and update GPDB init. instructions
  -- clarify greenplum_path.sh must be added to login script.
  -- add links to setting GPDB env. vars. section
  -- move logging in as gpadmin and sourcing greenplum_path.sh to a single location
  -- clean up xrefs
* docs - review comment updates
-
Committed by Jimmy Yih
After pg_get_viewdef() was fixed to handle views defined with gp_dist_random(), the binary swap test started to fail because one of those views was being incorrectly dumped. This is an expected failure, and we need to remove the view before running the binary swap test. Expected failure diff:

```
< FROM ONLY public.locktest_segments_dist
---
> FROM gp_dist_random('public.locktest_segments_dist')
```

GPDB Reference: https://github.com/greenplum-db/gpdb/commit/7af86a0bf2871a9e
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-