- 01 Sep 2016, 3 commits
-
-
Committed by Ashwin Agrawal
Refactor the code to use a common routine to fetch persistent table (PT) info for xlogging. A check can easily be added at this common place to validate that persistent info is available. Also add a check during recovery for a zero persistentTID: with PostgreSQL upstream merges it is possible that the function populating persistent info is not called at all, so the check will not fire during xlog record construction, but at least recovery gives a clear clue.
-
Committed by Ashwin Agrawal
Lazy vacuum is harmless, but it also brings no benefit; it just performs extra work. Vacuum full could turn out dangerous, since it can move tuples around, changing their TIDs and violating the references from gp_relation_node. In general a persistent table contains only frozen tuples, so vacuum full is harmless too, but one scenario where it becomes dangerous is the zero-page case caused by a failure after page extension but before page initialization. Also, since all tuples in persistent tables are frozen inserts, skip them in the database age calculation.
-
Committed by Andreas Scherbaum
Closes: #1057 Fixes: #912
-
- 31 Aug 2016, 5 commits
-
-
Committed by Andreas Scherbaum
Closes: #1050 Closes: #1028
-
Committed by Kenan Yao
If a QE crashes for reasons such as SIGSEGV, SIGKILL, or PANIC, segment postmaster reset sometimes fails. The root cause: the primary segment postmaster first tells its child processes to exit, then starts a filerep peer reset process to instruct the mirror postmaster to do a reset. The filerep peer reset process only exits when the mirror postmaster finishes or fails the reset procedure. The primary postmaster waits for the termination of important processes such as AutoVacuum, BgWriter, CheckPoint, and the filerep peer reset process before it resets shared memory and restarts the auxiliary processes. However, in some cases the primary postmaster gets stuck at the filerep peer reset step if the mirror postmaster is hanging or waiting for some event. When that happens, the filerep peer reset process waits until timeout (1 hour) and retries 10 times before reporting failure to the primary postmaster, so the primary postmaster takes 10 hours in total to report the reset failure.

This happens almost every time on a mirror segment host with poor performance, because the mirror postmaster performs a reset procedure similar to the primary's: it notifies child processes to exit, waits for their termination, and then restarts the auxiliary processes. The filerep peer reset process first connects to the mirror postmaster to request a postmaster reset, then checks the reset status of the mirror every 10 ms by connecting to the mirror postmaster again. So the filerep peer reset process can keep connecting to the mirror postmaster, which leads to a continuous stream of forked dead_end backend processes, while at the same time the mirror postmaster waits for all dead_end backends to exit. The rate of spawning new dead_end processes can exceed the rate at which they exit, so the mirror postmaster may never observe the clearance of its children. All in all, this can lead to a hang and to failure of the postmaster reset.
This issue exists for master postmaster reset as well under heavy workload.
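The race described above can be made concrete with a toy model (Python, purely illustrative; the tick granularity and rates are assumptions, not GPDB code): the reset completes only if the postmaster reaps exiting dead_end backends faster than the 10 ms status polls spawn new ones.

```python
# Toy model of the dead_end livelock: per tick, the postmaster reaps some
# exited children; the peer-reset poller then forks new dead_end backends.
# The reset can proceed only when the postmaster observes zero children.
def reset_finishes(spawn_per_tick, reap_per_tick, initial_children, max_ticks=10_000):
    children = initial_children
    for _ in range(max_ticks):
        children = max(0, children - reap_per_tick)  # reap exited dead_ends
        if children == 0:
            return True                              # all children gone: reset proceeds
        children += spawn_per_tick                   # polls fork new dead_ends
    return False                                     # livelock: count never drains

# On a fast host reaping outpaces spawning and the reset completes ...
assert reset_finishes(spawn_per_tick=1, reap_per_tick=5, initial_children=20)
# ... but on a slow host the spawn rate matches the reap rate and the
# child count never reaches zero, matching the hang described above.
assert not reset_finishes(spawn_per_tick=5, reap_per_tick=5, initial_children=20)
```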
-
Committed by Kenan Yao
-
Committed by Kenan Yao
If a QD crashes for reasons such as SIGSEGV, SIGKILL, or PANIC, postmaster reset sometimes fails. The root cause: the postmaster first tells its child processes to exit, then waits for the termination of important processes such as AutoVacuum, BgWriter, and CheckPoint before it resets shared memory and restarts the auxiliary processes. However, the WAL writer process is missing from the waiting list, so the postmaster may spawn the StartupProcess and only then notice the exit of the WAL writer, upon which it tells the StartupProcess to exit. The postmaster then notices the abnormal exit of the StartupProcess in turn, treats it as a recovery failure, and calls exit() itself. Thus we end up with no postmaster process on the master node at all. This happens almost every time when the master host machine has poor performance.
-
Committed by Shreedhar Hardikar
Used when compiling generated code. EXPLAIN codegen also runs optimization at this optimization level, making it easier to see which features the compiler optimizes.
-
- 30 Aug 2016, 1 commit
-
-
Committed by Pengzhou Tang
Standardize the compile flag rules of pg_basebackup with those of other utilities such as psql and pg_ctl.
-
- 29 Aug 2016, 19 commits
-
-
Committed by Pengzhou Tang
isSimplyUpdatableRelation may receive invalid input from statements like "DECLARE XX CURSOR FOR SELECT * FROM LOWER('HH')", where a function rather than a relation is selected.
-
Committed by Pengzhou Tang
1. Remove the retry mechanism for reader gangs and for errors other than "in recovery mode"; gp_segment_connect_timeout defaults to 10 minutes, which should be long enough to conclude that we have temporarily lost the segments. 2. Fix the "in recovery mode" retry mechanism; the original code could not recognize an in-recovery-mode error. 3. Add failure details; "failed to acquire resources on one or more segments" hides too many details.
-
Committed by Pengzhou Tang
1. Remove the retry mechanism for reader gangs and for errors other than "in recovery mode"; gp_segment_connect_timeout defaults to 10 minutes, which should be long enough to conclude that we have temporarily lost the segments. 2. Fix the "in recovery mode" retry mechanism; the original code could not recognize an in-recovery-mode error. 3. Add failure details; "failed to acquire resources on one or more segments" hides too many details. 4. Only destroy all gangs when creating the writer gang fails; otherwise we may clean up gangs opened by cursors and cause unexpected errors.
-
Committed by Pengzhou Tang
1. Fix a primary writer gang leak: PrimaryWriterGang was accidentally set to NULL, which prevented disconnectAndDestroyAllGangs() from destroying the primary writer gang. 2. Fix a gang leak: when creating a gang, if the retry count exceeded the limit, the failed gang was not destroyed. 3. Remove a duplicate sanity check before dispatchCommand(). 4. Remove an unnecessary error-out when a broken gang is no longer needed. 5. Fix a thread leak. 6. Enhance error handling in cdbdisp_finishCommand.
-
Committed by Kuien Liu
The regression tests have been executed serially since the beginning of the project; as time went on, more tests accumulated, consuming a lot of time on the Concourse pipeline. Modify these tests so they run under a parallel schedule.
-
Committed by Peifeng Qiu
Add regression tests for writing lots of files to S3, join queries between a local table and an S3 external table, and mixed data format queries (different data formats, CSV and TEXT, or TEXT with different delimiters). Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Committed by Haozhou Wang
1. Change the upload option from -u -f to -u. 2. Update the usage help message. 3. Update the gpcheckcloud regression tests. Signed-off-by: Kuien Liu, Peifeng Qiu
-
Committed by Adam Lee
Update the case that reads 5120 small files to read 2001 files instead.
-
Committed by Kuien Liu
Update the gpcheckcloud configuration template with autocompress, and clean up some code and comments. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Kuien Liu
If 'autocompress' in the s3 configuration file is set to 'true', all data will be compressed before being uploaded to S3. This reduces network traffic significantly, which also means saving money. The data is compressed in 'NO_FLUSH' mode before being injected into the underlying S3KeyWriter's buffer, and the latter invokes the RESTful layer to finish the upload. We do not buffer the data coming from s3extprotocol in the compression layer; all data blocks are fed into the ZStream to deflate immediately, because experimental results showed little performance improvement from buffering while it consumed more memory. Signed-off-by: Peifeng Qiu, Adam Lee
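A minimal Python sketch of the streaming idea (the real implementation is C++ around a zlib ZStream; this analogue only illustrates feeding blocks to the deflater as they arrive and flushing once at the end):

```python
import zlib

# Stream-compress data blocks as they arrive, analogous to feeding the
# ZStream with Z_NO_FLUSH, then flush once before the final upload.
def compress_blocks(blocks):
    co = zlib.compressobj()
    out = []
    for block in blocks:
        # With no flush requested, zlib may buffer internally and emit
        # compressed bytes only when it has accumulated enough input.
        out.append(co.compress(block))
    out.append(co.flush())  # emit whatever zlib still buffers
    return b"".join(out)

blocks = [b"hello " * 100, b"world " * 100]
compressed = compress_blocks(blocks)
assert zlib.decompress(compressed) == b"".join(blocks)
assert len(compressed) < len(b"".join(blocks))  # network traffic shrinks
```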
-
Committed by Haozhou Wang
1. Support "CANCEL" during uploading; the parts already uploaded to S3 are safely deleted. 2. s3ext no longer retries 3 times after the query is canceled by the user. 3. Fix error messages. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
Support downloading and uploading with special characters in the URL, such as "?&=:+". Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io>
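For illustration, here is how those characters survive a round trip once percent-encoded for a request URL (Python sketch, not the s3ext code itself):

```python
from urllib.parse import quote, unquote

# Percent-encode the special characters mentioned above so they can be
# placed safely in a URL, then decode to recover the original string.
special = "?&=:+"
encoded = quote(special, safe="")  # encode everything, keep nothing literal
assert encoded == "%3F%26%3D%3A%2B"
assert unquote(encoded) == special  # lossless round trip
```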
-
Committed by Kuien Liu
The extension of S3 files opened for writing is now determined by the format of the data source (from the FORMAT clause), replacing the default 'data'. For example, a file named '*.csv' is friendlier to users. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
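A hypothetical sketch of the mapping (the function name and the exact format-to-extension table are illustrative assumptions, not the s3ext implementation):

```python
# Pick the written file's extension from the FORMAT clause instead of a
# fixed '.data'. The mapping below is illustrative, not the real table.
def extension_for_format(fmt: str) -> str:
    return {"csv": ".csv", "text": ".txt"}.get(fmt.lower(), ".data")

assert extension_for_format("CSV") == ".csv"
assert extension_for_format("custom") == ".data"  # unknown formats keep the default
```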
-
Committed by Adam Lee
1. Fix a bug in the POST method. 2. Support uploading in gpcheckcloud. 3. Add regression cases for gpcheckcloud uploading. Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Peifeng Qiu
Add a set of URL-related utilities, such as getSchemaFromURL(), getRegionFromURL(), getBucketFromURL(), getPrefixFromURL(), and replaceSchemaFromURL(), which are used by both S3 writers and S3 readers. Merge the URL parser and the URL utilities into a single set of s3url files. Signed-off-by: Kuien Liu <kliu@pivotal.io>
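A hedged Python analogue of what such helpers extract (the real helpers are C++, and the URL layout schema://host/bucket/prefix is an assumption for illustration):

```python
from urllib.parse import urlparse

# Split an S3-style URL into the pieces the helpers above are named after.
def parse_s3_url(url):
    parts = urlparse(url)
    path = parts.path.lstrip("/").split("/", 1)  # first segment: bucket, rest: prefix
    return {
        "schema": parts.scheme,
        "host": parts.netloc,
        "bucket": path[0],
        "prefix": path[1] if len(path) > 1 else "",
    }

u = parse_s3_url("s3://s3-us-west-2.amazonaws.com/mybucket/data/part1.csv")
assert u["schema"] == "s3"
assert u["bucket"] == "mybucket"
assert u["prefix"] == "data/part1.csv"
```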
-
Committed by Peifeng Qiu
-
Committed by Peifeng Qiu
1. Refactor the ListBucket function of S3Service and get rid of all pointers; throw an exception if listBucket() fails. 2. Refactor the XML response message extraction code. 3. Fix the unit tests. Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Committed by Adam Lee
Replace the path used by the WET tests with a date-based random string, so WET regression cases can run on different pipelines simultaneously. The path of the s3 config file, previously hard-coded in the SQL files, can now be configured in the Makefile. Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Kuien Liu <kliu@pivotal.io>
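The naming scheme can be sketched as follows (Python; the prefix and token length are illustrative assumptions, not the actual Makefile logic):

```python
import datetime
import secrets

# Derive a per-run test path from today's date plus a random token, so
# concurrent pipelines writing to the same bucket never collide.
def wet_test_path(prefix="regress_wet"):
    date = datetime.date.today().strftime("%Y%m%d")
    token = secrets.token_hex(4)  # 8 random hex characters
    return f"{prefix}/{date}_{token}"

p1, p2 = wet_test_path(), wet_test_path()
assert p1 != p2  # two concurrent runs get distinct paths
assert p1.startswith("regress_wet/")
```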
-
- 28 Aug 2016, 1 commit
-
-
Committed by Daniel Gustafsson
-
- 26 Aug 2016, 8 commits
-
-
Committed by Heikki Linnakangas
GCC 6.1 complains about a "tautological compare". Per the comment, the intention here is to unconditionally fail the assertion, so use a more straightforward Assert(false) to do that.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
While the compiler will do a perfectly good job of optimizing this, the function adds nothing to the readability of the code given how trivial it is. Remove the function and inline the init code.
-
Committed by Daniel Gustafsson
The UDP signalling code was added because of a bug in the pthreads code in OS X 10.6 Snow Leopard, which prevented pthread_cond_timedwait() from working as intended. That version of the OS was discontinued in 2011, so it's time to retire this workaround. OS X is not a supported GPDB platform, and as such we only make a best effort to run on its currently latest version. In the process, also make OS X use _timedwait() rather than the relative-time variant, for simplicity.
-
Committed by Heikki Linnakangas
Commit 1eeea564 forgot to add the test queries to the ORCA-specific expected output file, causing the test to fail. But on closer inspection, there aren't any real differences between the ORCA and non-ORCA outputs, so let's just remove the alternative file.
-
Committed by Heikki Linnakangas
This seems to have been harmless, by pure chance. Passing 0 (false) instead of -1 as the location would only affect the context information in error messages. Passing -1 as the boolean 'include_dropped' argument makes expandRTE include dropped columns in the returned list, but that also seems harmless, given what the caller uses the list for. Nevertheless, it's clearly a bug, so fix it.
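This is a classic swapped-positional-argument bug. A small illustrative sketch (not the real expandRTE signature) of why it stayed invisible: both parameters accept small integers, so nothing objects, and a -1 passed where a boolean is expected simply reads as true:

```python
# Hypothetical two-parameter function standing in for the real call site.
def expand_rte(location, include_dropped):
    return {"location": location, "include_dropped": bool(include_dropped)}

buggy = expand_rte(0, -1)   # arguments swapped: 0 as location, -1 as the flag
fixed = expand_rte(-1, 0)   # intended: unknown location (-1), flag false

assert buggy["include_dropped"] is True   # -1 is truthy: dropped columns included
assert fixed["include_dropped"] is False  # keyword args would have caught the swap
```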
-
Committed by Heikki Linnakangas
Seems better to be precise with these. AOCS tables already used int8s for these, so this makes things more consistent, too.
-
- 25 Aug 2016, 3 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Whitespace and comment fixes, to follow the usual project style. Remove duplicated function comments between the .h and .c files. Per the usual project convention, all explanations of a function and its arguments are in the .c file, and the .h file only contains the prototypes. There were some additional comments about the "sections" of the files that seemed useful, but were only in the .h files. I moved those to the .c files instead.
-
Committed by Heikki Linnakangas
Turns out that commit 6c025b52 subtly changed the CRC calculation. The old crc32cFinish() inline function returned the final checksum, while the new FIN_CRC32C() macro modifies the variable in place. The old calls to crc32cFinish() discarded the return value, and were therefore in fact no-ops. That was surely not intentional, but it doesn't make any difference to the strength of the checksum, so it doesn't seem worth changing from previous releases.
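An illustrative Python analogue of that no-op (not the GPDB code; the xor-with-ones finalization mirrors CRC32C's final bit inversion): a finalize function that returns its result does nothing when the return value is discarded, while a macro-style finish mutates the variable in place.

```python
# Old style: finalization returns the value; the caller must assign it.
def crc_finish_by_value(crc):
    return crc ^ 0xFFFFFFFF  # CRC32C-style final inversion

# New style: finalization modifies the state in place, like FIN_CRC32C().
class CrcState:
    def __init__(self, crc):
        self.crc = crc
    def finish_in_place(self):
        self.crc ^= 0xFFFFFFFF

crc = 0x12345678
crc_finish_by_value(crc)      # return value discarded: a silent no-op
assert crc == 0x12345678      # the variable never changed

state = CrcState(0x12345678)
state.finish_in_place()       # in-place finish actually takes effect
assert state.crc == 0x12345678 ^ 0xFFFFFFFF
```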
-