- 31 Aug 2016, 3 commits
-
-
Submitted by Kenan Yao
-
Submitted by Kenan Yao
If a QD crashes for reasons such as SIGSEGV, SIGKILL, or PANIC, postmaster reset sometimes fails. The root cause: postmaster first tells child processes to exit, then waits for the termination of important processes such as AutoVacuum, BgWriter, and CheckPoint before it resets shared memory and restarts auxiliary processes; however, the WAL writer process is missing from the waiting list. The postmaster can therefore spawn StartupProcess and only then notice the exit of the WAL writer, so it tells StartupProcess to exit; it then notices the abnormal exit of StartupProcess in turn, treats it as a recovery failure, and calls exit() itself. Thus we end up with no postmaster process on the master node at all. This happens almost every time the master host machine has poor performance.
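The failure mode above boils down to a missing entry in a wait set. A minimal sketch of the invariant (illustrative Python, not the GPDB C source; all names here are hypothetical):

```python
# Illustrative sketch: the fix is to include the WAL writer in the set
# of children the postmaster must wait on before it resets shared
# memory and spawns StartupProcess.
IMPORTANT_CHILDREN = {"AutoVacuum", "BgWriter", "CheckPoint"}
IMPORTANT_CHILDREN_FIXED = IMPORTANT_CHILDREN | {"WalWriter"}

def safe_to_reset(exited, waiting_on):
    """Shared memory may only be reset once every waited-on child has exited."""
    return waiting_on.issubset(exited)

# Before the fix: the WAL writer is still running, yet the reset
# proceeds, and its later exit is misread as a recovery failure.
exited = {"AutoVacuum", "BgWriter", "CheckPoint"}
assert safe_to_reset(exited, IMPORTANT_CHILDREN)            # reset starts too early
assert not safe_to_reset(exited, IMPORTANT_CHILDREN_FIXED)  # fixed: keep waiting
```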
-
Submitted by Shreedhar Hardikar
Used when compiling generated code. EXPLAIN codegen also runs optimization at this level, making it easier to see which features the compiler optimizes.
-
- 30 Aug 2016, 1 commit
-
-
Submitted by Pengzhou Tang
Standardize the compile-flag rules of pg_basebackup with other utilities such as psql, pg_ctl, etc.
-
- 29 Aug 2016, 19 commits
-
-
Submitted by Pengzhou Tang
isSimplyUpdatableRelation may receive invalid input from statements like "DECLARE XX CURSOR FOR SELECT * FROM LOWER('HH')", where a function is selected.
-
Submitted by Pengzhou Tang
1. Remove the retry mechanism for reader gangs and for errors other than "in recovery mode"; gp_segment_connect_timeout defaults to 10 minutes, which should be long enough to conclude that we have temporarily lost the segments.
2. Fix the "in recovery mode" retry mechanism; the original code could not recognize an in-recovery-mode error.
3. Add failure details; "failed to acquire resources on one or more segments" hides too much information.
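The retry policy these points describe could be sketched as follows (a hypothetical Python sketch; `should_retry` and its arguments are illustrative, not the actual dispatcher code):

```python
# Hypothetical sketch of the retry policy above; names are illustrative.
def should_retry(error_message, is_writer_gang):
    # Reader gangs and errors other than "in recovery mode" are not
    # retried: gp_segment_connect_timeout (10 minutes by default) has
    # already given the segment ample time to respond.
    if not is_writer_gang:
        return False
    # Fix: actually recognize an in-recovery-mode error.
    return "in recovery mode" in error_message

assert should_retry("FATAL: the database system is in recovery mode", True)
assert not should_retry("connection timed out", True)
assert not should_retry("in recovery mode", False)
```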
-
Submitted by Pengzhou Tang
1. Remove the retry mechanism for reader gangs and for errors other than "in recovery mode"; gp_segment_connect_timeout defaults to 10 minutes, which should be long enough to conclude that we have temporarily lost the segments.
2. Fix the "in recovery mode" retry mechanism; the original code could not recognize an in-recovery-mode error.
3. Add failure details; "failed to acquire resources on one or more segments" hides too much information.
4. Only destroy all gangs when creating the writer gang fails; otherwise we may clean up gangs opened by cursors and cause unexpected errors.
-
Submitted by Pengzhou Tang
1. Fix a primary writer gang leak: PrimaryWriterGang was accidentally set to NULL, which prevented disconnectAndDestroyAllGangs() from destroying the primary writer gang.
2. Fix a gang leak: when creating a gang, if the retry count exceeded the limit, we forgot to destroy the failed gang.
3. Remove a duplicate sanity check before dispatchCommand().
4. Remove an unnecessary error-out when a broken gang is no longer needed.
5. Fix a thread leak.
6. Enhance error handling for cdbdisp_finishCommand.
-
Submitted by Kuien Liu
The regression suite has run serially since it was first set up; as time goes on, more tests accumulate, consuming much time on Concourse/Pipeline. So we modify these tests and run them with a parallel schedule.
-
Submitted by Peifeng Qiu
Add regression tests for writing lots of files to S3, join queries between a local table and an S3 external table, and mixed-data-format queries (different data formats, CSV and TEXT, or TEXT with different delimiters). Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Haozhou Wang
1. Change the upload option from "-u -f" to "-u".
2. Update the usage help message.
3. Update the gpcheckcloud regression tests.
Signed-off-by: Kuien Liu, Peifeng Qiu
-
Submitted by Adam Lee
Update the case that reads 5120 small files to read 2001 files instead.
-
Submitted by Kuien Liu
Update the gpcheckcloud configuration template with autocompress, and clean up some code and comments. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Submitted by Kuien Liu
If 'autocompress' in the s3 configuration file is set to 'true', all data will be compressed before being uploaded to S3. This reduces network traffic significantly, which means money saved as well. The data is compressed in 'NO_FLUSH' mode before being injected into the underlying S3KeyWriter's buffer, and the latter invokes the RESTful layer to finish the upload. We don't buffer data issued from s3extprotocol in the compression layer; that is, all data blocks are injected into the ZStream to deflate immediately, because experimental results showed little performance improvement from buffering while it consumed more memory. Signed-off-by: Peifeng Qiu, Adam Lee
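The streaming behaviour described here can be sketched with Python's stdlib zlib standing in for the C++ ZStream (illustrative only, assuming plain DEFLATE; the actual s3ext compression layer may differ):

```python
import zlib

# Sketch of the NO_FLUSH streaming described above: each incoming block
# is deflated immediately (zlib's default flush mode is Z_NO_FLUSH, so
# no extra buffering happens in the compression layer), and the stream
# is finalized exactly once at the end.
def compress_blocks(blocks):
    z = zlib.compressobj()
    out = []
    for block in blocks:
        out.append(z.compress(block))  # deflate immediately, Z_NO_FLUSH
    out.append(z.flush())              # finish the stream once
    return b"".join(out)

data = [b"chunk-one " * 100, b"chunk-two " * 100]
compressed = compress_blocks(data)
assert len(compressed) < len(b"".join(data))       # traffic reduced
assert zlib.decompress(compressed) == b"".join(data)
```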
-
Submitted by Haozhou Wang
1. Support "CANCEL" during uploading; the parts already uploaded to S3 will be safely deleted.
2. s3ext no longer retries 3 times after the query is canceled by the user.
3. Fix error messages.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Adam Lee
Support downloading and uploading with special characters in the URL, such as "?&=:+". Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io>
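For illustration, the characters in question are URL-reserved and must be percent-encoded before a request is sent; Python's stdlib shows the round trip (this is not the s3ext code itself):

```python
from urllib.parse import quote, unquote

# "?&=:+" are reserved in URLs: left raw, they would be parsed as
# query-string syntax (and "+" as a space) rather than as key content.
key = "report?&=:+2016.csv"
escaped = quote(key, safe="")   # percent-encode every reserved character
assert "?" not in escaped and "&" not in escaped and "+" not in escaped
assert unquote(escaped) == key  # decoding recovers the original key
```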
-
Submitted by Kuien Liu
The extension of S3 files written out is determined by the format of the data source (from the FORMAT clause), replacing the default 'data'. For example, a '*.csv' file name is friendlier for users. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Submitted by Adam Lee
1. Fix a bug in the POST method.
2. Support uploading in gpcheckcloud.
3. Add regression cases for gpcheckcloud uploading.
Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Peifeng Qiu
Add a set of URL-related utilities, such as getSchemaFromURL(), getRegionFromURL(), getBucketFromURL(), getPrefixFromURL(), and replaceSchemaFromURL(), which are used in both S3 writers and S3 readers. Merge the URL parser and URL utilities into a single set of s3url files. Signed-off-by: Kuien Liu <kliu@pivotal.io>
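Rough Python sketches of what helpers with these names might do, for one common S3 URL shape (illustrative; the real utilities are C++ and may handle more URL variants):

```python
import re

# Illustrative counterparts of getSchemaFromURL(), getRegionFromURL(),
# and a bucket/prefix splitter, for path-style S3 endpoint URLs.
def get_schema_from_url(url):
    return url.split("://", 1)[0]

def get_region_from_url(url):
    m = re.search(r"s3[.-]([a-z0-9-]+)\.amazonaws\.com", url)
    return m.group(1) if m else None

def get_bucket_and_prefix(url):
    path = url.split("amazonaws.com/", 1)[1]
    bucket, _, prefix = path.partition("/")
    return bucket, prefix

url = "https://s3-us-west-2.amazonaws.com/mybucket/nightly/dump"
assert get_schema_from_url(url) == "https"
assert get_region_from_url(url) == "us-west-2"
assert get_bucket_and_prefix(url) == ("mybucket", "nightly/dump")
```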
-
Submitted by Peifeng Qiu
-
Submitted by Peifeng Qiu
1. Refactor the ListBucket function of S3Service, getting rid of all pointers; throw an exception if listBucket() fails.
2. Refactor the XML response message extraction code.
3. Fix unit tests.
Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Submitted by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Adam Lee
Replace the path used by the WET tests with a date-based random string, so that WET regression cases can run on different pipelines simultaneously. The S3 config file's path, previously hard-coded in the SQL files, can now also be configured in the Makefile. Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
- 28 Aug 2016, 1 commit
-
-
Submitted by Daniel Gustafsson
-
- 26 Aug 2016, 8 commits
-
-
Submitted by Heikki Linnakangas
GCC 6.1 complains about a "tautological compare". Per the comment, the intention here is to unconditionally fail the assertion, so use a more straightforward Assert(false) to do that.
-
Submitted by Daniel Gustafsson
-
Submitted by Daniel Gustafsson
-
Submitted by Daniel Gustafsson
While the compiler will do a perfectly good job of optimizing this, the function adds nothing to the readability of the code given how trivial it is. Remove it and inline the init code.
-
Submitted by Daniel Gustafsson
The UDP signalling code was added due to a bug in the pthreads code in OS X 10.6 Snow Leopard which prevented pthread_cond_timedwait() from working as intended. That version of the OS was discontinued in 2011, so it's time to retire this workaround. OS X is not a supported GPDB platform, and as such we only make a best effort to run on the currently latest version. In the process, also make OS X use _timedwait() rather than the relative-time function, for simplicity.
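For context, pthread_cond_timedwait() takes an absolute deadline, whereas the relative variant takes a timeout measured from now; converting between the two is a one-liner (illustrative Python, not the GPDB code):

```python
import time

# pthread_cond_timedwait() expects an absolute deadline; a relative
# timeout must first be converted against the current clock.
def absolute_deadline(relative_timeout_s):
    return time.time() + relative_timeout_s

deadline = absolute_deadline(5.0)
assert time.time() < deadline <= time.time() + 5.0
```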
-
Submitted by Heikki Linnakangas
Commit 1eeea564 forgot to add the test queries to the ORCA-specific expected output file, causing the test to fail. But on closer inspection, there aren't any real differences between the ORCA and non-ORCA outputs, so let's just remove the alternative file.
-
Submitted by Heikki Linnakangas
This seems to have been harmless, by pure chance. Passing 0 (false) instead of -1 as the location would only affect the context information in error messages. Passing -1 as the boolean 'include_dropped' argument makes expandRTE include dropped columns in the returned list, but that seems harmless too, given what the caller uses the list for. Nevertheless, it's clearly a bug, so fix it.
-
Submitted by Heikki Linnakangas
Seems better to be precise with these. AOCS tables already used int8s for these, so this makes things more consistent, too.
-
- 25 Aug 2016, 8 commits
-
-
Submitted by Heikki Linnakangas
-
Submitted by Heikki Linnakangas
Whitespace and comment fixes, to follow the usual project style. Remove duplicated function comments between the .h and .c files. Per the usual project convention, all explanations of a function and its arguments are in the .c file, and the .h file only contains the prototypes. There were some additional comments about the "sections" of the files that seemed useful, but were only in the .h files. I moved those to the .c files instead.
-
Submitted by Heikki Linnakangas
Turns out that commit 6c025b52 subtly changed the CRC calculation. The old crc32cFinish() inline function returned the final checksum, while the new FIN_CRC32C() macro modifies the variable in place. The old calls to crc32cFinish() discarded the return value, and were therefore in fact no-ops. That was surely not intentional, but it doesn't make any difference to the strength of the checksum, so it doesn't seem worth changing from previous releases.
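The bug class is easy to reproduce in miniature: a finish step that returns its result is a no-op if the caller ignores the return value, while an in-place macro is not (illustrative Python, not the GPDB CRC code):

```python
# Illustration of the bug class described above: the old
# crc32cFinish()-style function *returned* the finalized value, so a
# call that discarded the return value was a no-op, whereas the new
# FIN_CRC32C()-style macro modifies the variable in place.
def crc_finish(crc):
    return crc ^ 0xFFFFFFFF   # old style: caller must use the result

crc = 0x12345678
crc_finish(crc)               # bug: result discarded, crc is unchanged
assert crc == 0x12345678
crc = crc_finish(crc)         # correct usage: assign the result back
assert crc == 0x12345678 ^ 0xFFFFFFFF
```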
-
Submitted by Heikki Linnakangas
-
Submitted by Heikki Linnakangas
Looking at old git history, this was added back in 2009. The related ticket on adding it said: Add GUC that make every buffer page from the buffer pool flushed on eviction. Note that this will NOT necessarily flush all buffer pages when the postmaster is shutdown. I think this is acceptable for our purposes. (Our purpose is to make sure that overwrites of the buffer pages are not lost and instead are always written to disk so we can catch errors) I'm not sure what errors that was meant to catch, or how, but I don't think we have any regression tests or anything that uses it anymore. Let's remove it, to make merging with upstream easier.
-
Submitted by Heikki Linnakangas
These were only used by CaQL. I didn't realize that earlier...
-
Submitted by Daniel Gustafsson
This actually returns 6 rows and not 5; not sure why that hasn't triggered before, but the reorganisation of gpdiff uncovered it.
-
Submitted by Daniel Gustafsson
The variadic and default parameter tests are specific to GPDB, so move them to our own regress schedule file.
-