- 18 Feb 2017, 12 commits
-
-
Committed by Heikki Linnakangas
These tests merely check that "gpactivatestandby --help" and "gpinitstandby --help" produce a particular output. That's not very interesting; --help seems highly unlikely to suddenly stop working. Also, memorizing the exact help output in a test adds a hurdle to keeping the --help text up to date: you have to remember to update the test whenever you update the help text. What would be useful is a test that checks that --help covers all the options and provides useful advice to a DBA. But that cannot be automated.
-
Committed by Karen Huddleston
* Some tests in Concourse are failing because they take just over 15 minutes to provision Pulse machines. This time increase should help make those tests pass more consistently.
-
Commit d09cf2e6 introduced a proc_exit(0) call based on the check that the postmaster is not alive and the segment is in fault. The liveness check is incorrect for resync workers, because it passes true to `PostmasterIsAlive()` even though resync workers are not direct children of the postmaster. Because of the incorrect check, proc_exit(0) is always called for a resync worker when the segment is in fault. Also, locks are not released on exit, which can cause other backends' `StartTransaction()` to hang on the virtual xact lock. Fixes #1791
-
Committed by Heikki Linnakangas
I removed the last tests from that directory in commit 4d23ab19.
-
Committed by Shreedhar Hardikar
-
Committed by Heikki Linnakangas
The "CREATE TABLE ... AS SELECT ..." commands that were used to create the test tables choose random distribution policy when ORCA is used, while without ORCA, it chooses the first column as the distribution key. I'm not sure why, and it seems inconsistent, but work around that by using CREATE TABLE + INSERT ... SELECT to create and populate the tables instead. With CTAS, the all the rows in AO tables go to AO segfile 0, while with CREATE TABLE + separate INSERT, they go to segfile 1, so update the expected output for that.
-
Committed by C.J. Jameson
Signed-off-by: Chumki Roy <croy@pivotal.io>
-
Committed by Heikki Linnakangas
These tests need the usual "lineitem", "orders" and "suppliers" test tables, but use only a subset of the data that we have in the data/*.csv files. Therefore, split the csv files into two parts: the "small" set, used by these new tests, and the rest.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
I tried reverting the commit that fixed the bug, but I couldn't actually reproduce the original bug anymore. Still, this is a very quick query, so we might as well keep it.
-
Committed by Heikki Linnakangas
-
Committed by David Yozie
* adding graphics directory for adminguide
* updating source to use local /graphics dir
* adding missing graphics
* adding ditamap for client docs; removing some extraneous landing pages; adding local graphics
-
- 17 Feb 2017, 14 commits
-
-
Committed by Heikki Linnakangas
I had to change some of the tests that used 4 test rows to use more (9), to make them pass on a 3-segment cluster. If you insert only 4 rows into an external table, some segments might not receive any rows, and reading from the same file later will fail because the file was never created. I also changed the queries that test that rows are distributed evenly to work with a different number of segments. Hook up the new tests to installcheck-world.
-
Committed by Heikki Linnakangas
The explain plan with ORCA spells Seq Scan as Table Scan. Add alternative expected output for that.
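A quick way to see the difference, assuming a hypothetical table foo and the standard optimizer GUC:

```sql
SET optimizer = off;          -- Postgres planner
EXPLAIN SELECT * FROM foo;    -- node printed as "Seq Scan on foo"

SET optimizer = on;           -- ORCA
EXPLAIN SELECT * FROM foo;    -- same node printed as "Table Scan on foo"
```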
-
Committed by Heikki Linnakangas
The main ExplainAnalyzeTestCase tests in here were running standard TPC-H-like test queries and checking that the EXPLAIN ANALYZE output contains a "Memory: " line for every plan node. That seems a bit excessive: looking at how those lines are printed, there is no reason to believe that you might have a Memory line for some nodes but not others, so I don't think we need to test for that. In order to still have minimal coverage for the explain_memory_verbosity GUC, add a small test case for it to the main regression suite. The mpp20785 test was testing for an old bug where this query produced an out-of-memory error. However, the test was broken: the test schema was nowhere to be found, so it simply resulted in a "schema not found" error. Perhaps we could put it back if we could find the original schema somewhere, but it's useless as it is.
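A minimal sketch of the kind of coverage described, assuming a hypothetical table foo and that explain_memory_verbosity accepts a value that enables the per-node output:

```sql
SET explain_memory_verbosity = summary;     -- assumed value
EXPLAIN ANALYZE SELECT count(*) FROM foo;   -- plan nodes should carry "Memory: ..." lines
```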
-
-
Committed by David Sharp
Rely on the credentials being in the environment rather than explicitly passing them. Add a comment to document the requirement. See also: 0d840bb2
-
Committed by Bhuvnesh Chaudhary
When we create a Left Anti Semi Join relation and the left side is a dummy relation, the join relation is also a dummy relation. Signed-off-by: Omer Arap <oarap@pivotal.io>
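A hypothetical illustration: in Greenplum the NOT IN subquery becomes a Left Anti Semi Join, and the contradictory predicate makes the left side provably empty (a dummy relation), so the planner can mark the whole join dummy:

```sql
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int);

-- The left input is reduced to a dummy relation by "1 = 2",
-- so the LASJ as a whole is known to produce no rows:
EXPLAIN SELECT * FROM t1
WHERE a NOT IN (SELECT b FROM t2) AND 1 = 2;
```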
-
Committed by Ashwin Agrawal
QE_READER and QE_ENTRY_DB_SINGLETON are not expected to allocate transaction IDs. They are just helper sub-processes of the WRITER and should be using its xid.
-
Committed by Ashwin Agrawal
SET default_tablespace should ideally be a read-only transaction and shouldn't allocate an xid. Currently, `get_tablespace_oid()` acquires a lock (needed elsewhere to protect against concurrent operations), and taking the lock ends up assigning a transaction ID. Setting the GUC doesn't need the lock; worse, it creates a problem where multiple readers and a writer in a gang each assign an independent xid when this GUC is dispatched. This happens because SET runs as QE_AUTO_COMMIT_IMPLICIT for the writer and hence has deferred allocation of its transaction ID. The filespace ICW tests detected this case: without the fix, they hit the assertion FailedAssertion("!(!((proc->xid) != ((TransactionId) 0)))", File: "procarray.c". The simplest scenario that demonstrates the problem is:
- DROP TABLE a;
- CREATE TABLE a(a INT, b INT);
- INSERT INTO a VALUES (generate_series(1,2), generate_series(3,4)); -- generates 1 reader and 1 writer gang
- SET default_tablespace='pg_default';
-
Committed by Ashwin Agrawal
The commit xlog record carries the information for objects that serves crash recovery until the post-commit persistent object work is done, so we cannot allow a checkpoint in between. Use the inCommit flag to achieve this, which also makes it consistent with FinishPreparedTransaction(). Without the fix, the following sequence leaves dangling persistent table entries and files on disk:
- start a transaction to drop a table
- `RecordTransactionCommit()`
- checkpoint before `AtEOXact_smgr()`
- crash/recover
-
Committed by Chuck Litzell
* docs: add Enum datatype
* docs: add a release note for enum datatype
* remove optimizer support statement.
-
Committed by Jimmy Yih
The cs-aoco-compression job was added with the intention of running on a cron schedule instead of on every commit batch. However, I didn't notice that the aggregate it points to contains a commit trigger... so the project needs its own aggregate. This aggregate can be turned into a reference later if any other test jobs become cron-triggered.
-
Committed by Venkatesh Raghavan
ANALYZE should produce only leaf partition stats when the GUCs optimizer_analyze_midlevel_partition and optimizer_analyze_root_partition are set to off. This fix makes the behaviour of ANALYZE consistent with the intent of the GUCs.
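A minimal sketch of the intended behaviour, assuming a hypothetical partitioned table sales:

```sql
SET optimizer_analyze_midlevel_partition = off;
SET optimizer_analyze_root_partition = off;

-- With both GUCs off, this should collect statistics
-- for the leaf partitions only:
ANALYZE sales;
```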
-
This commit also resolves #1646. It ports only the function `tuplesort_get_stats` in tuplesort.c from the Postgres commit https://github.com/postgres/postgres/blob/9bd27b7c9e998087f390774bd0f43916813a2847/src/backend/utils/sort/tuplesort.c. Link to the gpdb-dev discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/V-zIshnNzyE
-
Committed by Jesse Zhang
This commit adds back `resultRelations` to the out func for `PlannedStmt`. The omission affects `debug_print_plan` and `explain verbose` (until 8.4+, I guess). This seems to be an unintended omission introduced during the back-and-forth between 4e01c159 and 35f25e89.
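One hypothetical way to observe the fix: with debug_print_plan enabled, the PlannedStmt node dumped to the server log for a DML statement should again include a :resultRelations field:

```sql
CREATE TABLE t (x int);
SET debug_print_plan = on;
INSERT INTO t VALUES (1);   -- check the server log for ":resultRelations"
```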
-
- 16 Feb 2017, 14 commits
-
-
Committed by Pengzhou Tang
This commit reverts the primary writer gang recreation logic introduced in commit e28c84b2, which made some test cases produce unexpected answer files. It also improves getAvailableGang() so that it walks the whole freelist of reader gangs until it finds a healthy gang.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
This commit adds 2 C&S tests to start with, running natively. It also adds logic to print out diffs for these in case of failures, for easy troubleshooting.
-
Committed by Adam Lee
```
When you send a request, you must tell Amazon S3 which of the preceding options you have chosen in your signature calculation, by adding the x-amz-content-sha256 header with one of the following values:

If you choose chunked upload options, set the header value to STREAMING-AWS4-HMAC-SHA256-PAYLOAD.

If you choose to upload payload in a single chunk, set the header value to the payload checksum (signed payload option), or set the value to the literal string UNSIGNED-PAYLOAD (unsigned payload option).
```
ref: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html

We used to neither hash nor sign the payload, which is not quite secure with HTTPS off (although HTTPS is enabled by default). Also, some compatible S3 services don't support the "unsigned payload option"; this commit fixes that.

Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Venkatesh Raghavan
Issue: #1759
-
Committed by Chris Hajas
Replaces long-running behave tests in the gptransfer suite with unit tests, since they were only testing argument parsing. This shortens the gptransfer suite by about 15 minutes. Authors: Chris Hajas and Jamie McAtamney
-
Committed by Chuck Litzell
* docs: Updatable cursors support
* fix PL/pgSQL typos
* Move updatable cursors release note to changed features and note that it is a clarification and doc change.
-
Committed by Haisheng Yuan
-
Committed by Heikki Linnakangas
The test output with ORDER BY was not fully determined, because there were some rows with the same key column but different second column. Remove the ORDER BY from those queries and let gpdiff.pl mask the difference in row order instead.
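A small hypothetical illustration of the underdetermined ordering:

```sql
CREATE TABLE dup_keys (k int, v text);
INSERT INTO dup_keys VALUES (1, 'a'), (1, 'b'), (2, 'c');

-- The two rows with k = 1 may come back in either relative order;
-- ORDER BY k alone does not pin the output down:
SELECT * FROM dup_keys ORDER BY k;
```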
-
Committed by Chris Hajas
Commit 20332d98 disabled the ALTER TABLE CLUSTER ON syntax. This removes those tests from the gptransfer test suite.
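For reference, a sketch of the now-disabled syntax, with hypothetical table and index names:

```sql
CREATE TABLE clustered_t (a int);
CREATE INDEX clustered_t_idx ON clustered_t (a);

ALTER TABLE clustered_t CLUSTER ON clustered_t_idx;  -- rejected since commit 20332d98
```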
-
Committed by Heikki Linnakangas
We already have most of these exact same queries in src/bin/gpfdist/regress/input/exttab1.source, and the rest are not very interesting. The other queries not already in exttab1.source query regular tables rather than external tables, or are so similar to existing queries that they don't provide any meaningful extra coverage. The reason to attack this test right now is that commit 743b886a broke it. It'd be trivial to just fix the expected output, but let's rather remove it altogether. (Most of the other query* files in this directory could probably be removed too, or moved to src/bin/gpfdist/regress, but I'll leave that for tomorrow.)
-
Committed by Venkatesh Raghavan
Make the test deterministic. Similar to https://github.com/greenplum-db/gpdb/pull/1655
-
Committed by Venkatesh Raghavan
Analyzing the root partition should not generate statistics for the leaf or midlevel partitions, regardless of the value of the GUC optimizer_analyze_midlevel_partition. This fix makes it consistent with our docs.
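A minimal sketch, assuming Greenplum's ANALYZE ROOTPARTITION syntax and a hypothetical partitioned table sales:

```sql
-- Should collect stats on the root partition only, leaving leaf and
-- midlevel partition stats untouched regardless of
-- optimizer_analyze_midlevel_partition:
ANALYZE ROOTPARTITION sales;
```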
-
Committed by Lisa Owen
* updates to language create/alter/drop allowing db owners
* identify pl/r and pl/python as untrusted
-