- 13 March 2017 (3 commits)
-
-
Committed by Adam Lee

It's actually safe, because the functions we created take two arguments at most. Fix it anyway, in case three-argument functions are introduced in the future.

/tmp/build/0e1b53a0/gpdb_src/contrib/dblink/dblink.c: 241 in dblink_connect()
*** CID 163942: Null pointer dereferences (FORWARD_NULL)
    235      * Create a persistent connection to another database
    236      */
    237     PG_FUNCTION_INFO_V1(dblink_connect);
    238     Datum
    239     dblink_connect(PG_FUNCTION_ARGS)
    240     {
    >>> CID 163942: Null pointer dereferences (FORWARD_NULL)
    >>> Assigning: "connstr" = "NULL".
    241         char *connstr = NULL;
    242         char *connname = NULL;
    243         char *msg;
    244         PGconn *conn = NULL;
    245         remoteConn *rconn = NULL;
    246

Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
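The guard being added can be sketched like this. This is a hypothetical, simplified stand-in for dblink_connect()'s argument handling, not the actual dblink code; `pick_connstr` and its signature are illustration only:

```c
#include <stddef.h>

/* Hypothetical simplification of dblink_connect()'s argument handling:
 * with one argument it is the connection string; with two, the first is
 * the connection name and the second the connection string.  Initializing
 * to NULL is safe today because no function takes three arguments, but
 * checking before use keeps it safe if one is ever added. */
static const char *
pick_connstr(int nargs, const char *arg1, const char *arg2,
             const char **connname_out)
{
    const char *connstr = NULL;
    const char *connname = NULL;

    if (nargs == 2)
    {
        connname = arg1;
        connstr = arg2;
    }
    else if (nargs == 1)
        connstr = arg1;

    /* Never dereference connstr without this check. */
    if (connstr == NULL)
        return NULL;

    *connname_out = connname;
    return connstr;
}
```

With three or more arguments, nothing is assigned and the NULL check fires, which is exactly the future-proofing the commit describes.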
-
Committed by Adam Lee

commit cae7ad90
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Mon Sep 22 13:55:14 2008 +0000

    Fix dblink_connect() so that it verifies that a password is supplied in the
    conninfo string *before* trying to connect to the remote server, not after.
    As pointed out by Marko Kreen, in certain not-very-plausible situations
    this could result in sending a password from the postgres user's .pgpass
    file, or other places that non-superusers shouldn't have access to, to an
    untrustworthy remote server. The cleanest fix seems to be to expose libpq's
    conninfo-string-parsing code so that dblink can check for a password option
    without duplicating the parsing logic.

    Joe Conway, with a little cleanup by Tom Lane
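The actual fix exposes libpq's conninfo parser so dblink can look for a password option before connecting. As a rough, stdlib-only sketch of that check (the naive key=value scanner below is illustration, not libpq's parser, and it ignores quoting and escapes):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Scan a conninfo string for a "password" option.  Naive sketch: real
 * conninfo parsing handles quoted values and escapes; this does not. */
static bool
conninfo_has_password(const char *conninfo)
{
    const char *p = conninfo;

    while (*p)
    {
        while (isspace((unsigned char) *p))
            p++;
        const char *key = p;
        while (*p && *p != '=' && !isspace((unsigned char) *p))
            p++;
        size_t keylen = (size_t) (p - key);
        if (*p == '=')
        {
            p++;                /* skip '=' */
            while (*p && !isspace((unsigned char) *p))
                p++;            /* skip the value */
        }
        if (keylen == strlen("password") &&
            strncmp(key, "password", keylen) == 0)
            return true;
    }
    return false;
}
```

The point of the ordering fix is that dblink runs a check like this and errors out before ever opening a connection, so nothing from .pgpass can leak to an untrustworthy remote server.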
-
Committed by Heikki Linnakangas

Some notable differences:

* For the last "appendonly_verify_block_checksums_co" test, don't use a compressed table. That seems fragile.
* In the "appendonly_verify_block_checksums_co" test, be more explicit about what is replaced. It used to scan backwards from the end of the file for the byte 'a', but we didn't explicitly include that byte in the test data. What actually gets replaced depends heavily on how integers are encoded. (And the table was compressed too, which made it even more nondeterministic.) In the rewritten test, we replace the string 'xyz', and we use a text field that contains that string as the table data.
* Don't restore the original table file after corrupting it. That seemed uninteresting to test. Presumably the table was OK before we corrupted it, so surely it's still OK after restoring it back. In theory, there could be a problem if the file's corrupt contents were cached somewhere, but we don't cache AO tables, and I'm not sure what we'd try to prove by testing that anyway, because swapping the file while the system is active is surely not supported.
* The old script checked that the output of SELECTing from a corrupt table contained the string "ERROR: Header checksum does not match". However, half of the tests actually printed a different error, "Block checksum does not match". It turns out that the way the old select_table function asserted the result of the grep command was wrong. It should've done "assert(int(result) > 0), ..." rather than just "assert(result > 0), ...". As written, it always passed, even if there was no ERROR in the output. The rewritten test does not have that bug.
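The assertion bug is easy to reproduce in miniature: the helper's `result` was the *string* output of a grep pipeline, and comparing a string to 0 always passed. The same shape of bug in C would compare the pointer instead of the parsed count. A sketch of the fixed check (`grep_matched` is a made-up name):

```c
#include <stdlib.h>

/* count_output is the textual output of something like `grep -c ERROR`.
 * The buggy pattern asserted on the string itself (always truthy); the
 * fix converts it to an integer before comparing. */
static int
grep_matched(const char *count_output)
{
    return atoi(count_output) > 0;
}
```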
-
- 12 March 2017 (2 commits)
-
-
Committed by Ashwin Agrawal

The test stops the standby and checks, among other things, that the walsender is gone, but the walsender may take some time to go away; hence, add a retry around the check. This makes the test robust, as failures were sometimes seen in CI.
-
Committed by Ashwin Agrawal

The test deleted the standby base directory and checked that promotion fails afterwards. It would be surprising if promotion succeeded after that, and even if it did, there would be no good reason to rely on it going through, hence deleting the test.
-
- 11 March 2017 (10 commits)
-
-
Committed by Daniel Gustafsson
We rely on the catalog Oid being present to be able to search for preassigned Oids to dispatch, so enforce the catalog relation Oid and error out if missing.
-
Committed by Abhijit Subramanya
These targets are not invoked directly anywhere so remove them.
-
Committed by Abhijit Subramanya
The fault `twophase_transaction_commit_prepared` would cause the segment to PANIC while writing a `COMMIT PREPARED` record on the segment. The master would then retry the `COMMIT PREPARED` while the postmaster was still resetting on the segment causing the master itself to `PANIC`. This patch introduces a new fault injector `finish_prepared_before_commit` which will not cause a `PANIC`, but just error out since the intent of the test is to ensure that the retry works correctly.
-
Committed by Shoaib Lari

The `with time zone` functionality is tested in several other storage test suites, such as those in access_methods, aoco_alter, aoco_compression, pg_two_phase, etc. Therefore, we are removing the `with time zone` columns from the storage/walrepl/crash tests.
-
Committed by Chris Hajas

DDBoost 3.0 will reach end of support soon, so we want to use the newer version.

Authors: Chris Hajas, Karen Huddleston, and Todd Sedano
-
Committed by Ashwin Agrawal
[ci skip]
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal

This test stage ran the filerep_end_to_end test suite workload and performed crash testing with it. It takes around 2 hours to complete, and it runs extremely heavyweight things like gpfdist, resource queues, etc., which don't seem of much interest to crash testing. In the long run we really need much simplified crash recovery testing: create a workload that generates all kinds of xlog records in the system and recover with it, instead of piggy-backing on the workloads of these already heavy test jobs.
-
Committed by Asim R P

This applies to tests that create a database with the checkpoint-skip fault injected. CREATE DATABASE now requests a checkpoint, and if it is skipped, any modifications to the template database pending to be written are potentially lost. The database template0 is guaranteed not to have any modifications, because user connections are not allowed on it, so use it for testing.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Asim R P

Also, update some answer files.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
- 10 March 2017 (15 commits)
-
-
Committed by Chuck Litzell

* Add a note about the updatable cursor restriction.
* Update/delete a single row. Remove a mystery <image> tag.
* Remove the suggestion that cursors can be updated with a dynamic PL/pgSQL EXECUTE statement.
* Edit out passive voice.
-
Committed by Daniel Gustafsson
The count(*) on exttab_cte_1 errorlog has failed without explanation, and without being reproducible, both locally and on the CI pipeline. Since the errorlog might hold insights into what went wrong, print it in an ignored block to preserve it for debugging before it's lost. This will include the errorlog output prefixed with GP_IGNORE in the regression.diffs file.
-
Committed by Heikki Linnakangas

We have tons of tests on creating AO and AOCO tables in the regular regression suite, with and without checksums, and a lot of tests on inserts into AO/AOCO tables. There are also tests on having indexes on tables, including AO/AOCO tables. I don't see anything interesting in these particular combinations.
-
Committed by Daniel Gustafsson

Blocks started by start_matchignore and start_matchsubs could be closed by either end_ tag, and only the closing command determined how the content was parsed/used. This could lead to subtle issues, as only a more-or-less silent warning was issued in the diff. Instead, save the state from the start_ command and check it against the end_ command, failing if they don't match.
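The stricter pairing amounts to a tiny state machine: remember which start_ command opened the block, and reject an end_ command that doesn't match it or that appears with no block open. A sketch (names, bare-token matching, and return codes are illustrative, not the test framework's actual parser):

```c
#include <string.h>

enum block_state { BLOCK_NONE, BLOCK_MATCHIGNORE, BLOCK_MATCHSUBS };

/* Returns 0 on success, -1 on a mismatched or stray end_ command. */
static int
feed_line(enum block_state *state, const char *line)
{
    if (strcmp(line, "start_matchignore") == 0)
        *state = BLOCK_MATCHIGNORE;
    else if (strcmp(line, "start_matchsubs") == 0)
        *state = BLOCK_MATCHSUBS;
    else if (strcmp(line, "end_matchignore") == 0)
    {
        if (*state != BLOCK_MATCHIGNORE)
            return -1;          /* closed by the wrong (or no) opener */
        *state = BLOCK_NONE;
    }
    else if (strcmp(line, "end_matchsubs") == 0)
    {
        if (*state != BLOCK_MATCHSUBS)
            return -1;
        *state = BLOCK_NONE;
    }
    return 0;
}
```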
-
Committed by Daniel Gustafsson

MetaTrackAddUpdInternal() takes a HeapTuple as an argument, which in turn is defined as a pointer to a HeapTupleData structure. Pass NULL rather than InvalidOid, as the latter gets treated as a null pointer constant, causing a compiler warning:

heap.c:750:13: warning: expression which evaluates to zero treated as a null pointer constant of type 'HeapTuple' (aka 'struct HeapTupleData *') [-Wnon-literal-null-conversion]
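The warning comes from passing a zero-valued integer expression where a pointer parameter is expected. A minimal stand-in (the types below mimic, not reproduce, the PostgreSQL definitions, and `tuple_is_null` is illustrative):

```c
#include <stddef.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

typedef struct HeapTupleData { int t_len; } *HeapTuple;

/* Stand-in for a function with a HeapTuple parameter.  Calling it with
 * InvalidOid compiles only because InvalidOid evaluates to zero, and
 * clang warns about that; passing NULL states the intent and is silent. */
static int
tuple_is_null(HeapTuple old_tuple)
{
    return old_tuple == NULL;
}
```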
-
Committed by Heikki Linnakangas
The old cdbfast test suite had tests for these. Seems like a good thing to test.
-
Committed by Heikki Linnakangas
These exact same tests were already in the 'external_table' and 'external_table_create_privs' tests in the main test suite. And they're not specific to the gpfdist protocol, so indeed belong in the main test suite, not here.
-
Committed by Heikki Linnakangas

All the readable tests are moved to a new regression suite in contrib/formatter_fixedwidth, next to the source. Writable tests, however, are added to gpfdist's 'custom_format' test, which already contained one writable test like this (query07, in the old test suite).

Some minor changes to the original tests:

* Use file:// URLs instead of gpfdist:// URLs for the readable tests. That's simpler, as you don't need a running gpfdist server.
* Leave out a couple of redundant tests. Queries 6 and 7 were already in gpfdist's 'custom_format' test suite.

I kept the original numbering, so that you can relate the tests to the old test suite. That will become obsolete as soon as this is committed, but might be useful for reviewing.
-
Committed by Jingyi Mei

Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Tom Meyer

We do this to ensure all PRs get picked up by the pipeline. Because of a Concourse bug with specifying `version: every`, we are not able to have a conventional pipeline with separate jobs.

Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Christopher Hajas

Signed-off-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Christopher Hajas
-
Committed by Roman Shaposhnik
-
Committed by Roman Shaposhnik

This is a backport of:

From 4893ccd0 Mon Sep 17 00:00:00 2001
From: Robert Haas <rhaas@postgresql.org>
Date: Sun, 6 Jul 2014 14:52:25 -0400
Subject: [PATCH] Remove swpb-based spinlock implementation for ARMv5 and earlier.

    Per recent analysis by Andres Freund, this implementation is in fact
    unsafe, because ARMv5 has weak memory ordering, which means that the CPU
    could move loads or stores across the volatile store performed by the
    default S_UNLOCK. We could try to fix this, but have no ARMv5 hardware
    to test on, so removing support seems better. We can still support ARMv5
    systems on GCC versions new enough to have built-in atomics support for
    this platform, and can also re-add support for the old way if someone
    has hardware that can be used to test a fix. However, since the
    requirement to use a relatively new GCC hasn't been an issue for ARMv6
    or ARMv7, which lack the swpb instruction altogether, perhaps it won't
    be an issue for ARMv5 either.
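The built-in atomics path the commit falls back to gives a test-and-set spinlock with the memory barriers the swpb implementation lacked on weakly ordered ARM. A minimal sketch in that style (illustration only, not PostgreSQL's actual s_lock.h):

```c
/* TAS spinlock built on GCC/clang __sync builtins.
 * __sync_lock_test_and_set provides an acquire barrier;
 * __sync_lock_release provides a release barrier and stores 0. */
typedef volatile int slock_t;

static void
my_spin_lock(slock_t *lock)
{
    while (__sync_lock_test_and_set(lock, 1))
        ;                       /* spin until we observe 0 -> 1 */
}

static void
my_spin_unlock(slock_t *lock)
{
    __sync_lock_release(lock);
}
```

Because the barriers come from the builtins rather than hand-written assembly, the same code is correct on ARMv5 through ARMv7 whenever the compiler supports these intrinsics for the target.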
-
Committed by David Sharp

Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
- 09 March 2017 (10 commits)
-
-
Committed by Daniel Gustafsson

A collection of typo fixes that were lying around. [ci skip]
-
Committed by Chris Hajas

Update sql/answer files to set the timezone to PST. This prevents test failures if the tests are run on a machine with a different timezone.

Authors: Chris Hajas and Karen Huddleston
-
Committed by Christopher Hajas
Authors: Chris Hajas and Todd Sedano
-
Committed by Christopher Hajas

read_bytes will be negative if read() returns a negative value on error. Therefore, it should be a signed int.
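The bug class is worth spelling out: read() returns ssize_t and signals errors with -1, so storing the result in an unsigned variable makes any `< 0` error check dead code. A sketch of the corrected pattern (`read_one_chunk` is an illustrative name, not the actual dump code):

```c
#include <unistd.h>

/* Returns the byte count on success, -1 on a read() error. */
static int
read_one_chunk(int fd, char *buf, size_t len)
{
    ssize_t read_bytes = read(fd, buf, len);    /* signed, as in the fix */

    if (read_bytes < 0)
        return -1;              /* reachable only because read_bytes is signed */
    return (int) read_bytes;
}
```

Had `read_bytes` been declared as an unsigned type, a failing read() would have appeared as a huge positive count and the error would have gone unnoticed.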
-
Committed by Christopher Hajas

This is a backup-breaking failure, and the dump should exit rather than simply breaking out of the loop.
-
Committed by Ashwin Agrawal

These .sql files assumed that the machines running the tests would have the timezone PST/PDT. This commit explicitly sets the timezone to PST in the .sql files and updates the .ans files for this new behavior.
-
This resolves #1880. Thanks to Lirong Jian for the initial PR #1952.

Details:
* Fixed a string-comparison typo in cdbexplain.
* Fixed the typo for UNINITIALIZED_SORT.
* Added a new method to convert a sort space type string to an enum.
-
Committed by Heikki Linnakangas

These were not very exciting tests, because ORCA falls back to the Postgres planner for these queries. But it's a reasonable test case for partition pruning in general after adding a WHERE clause to the query, so add one such test case to the main test suite.
-
Committed by Heikki Linnakangas

These tests are from the legacy cdbfast test suite, from the writable_ext_tbl/test_* files. They're not very exciting, and we already had tests for some variants of these commands, but looking at the way all these privileges are handled in the code, perhaps it's indeed best to have tests for many different combinations. The tests run in under a second, so they're not too much of a burden.

Compared to the original tests, I removed the SELECTs and INSERTs that tested that you can also read/write the external tables after successfully creating one. Those don't seem very useful; they all basically test that the owner of an external table can read/write it. By getting rid of those statements, these tests don't need a live gpfdist server to be running, which makes them a lot simpler and faster to run.

Also move the existing tests of these privileges from the 'external_table' test to the new file.
-
Committed by Shoaib Lari

The gpcheckcat 'inconsistent' check should skip the pg_index.indcheckxmin column for now because, due to the HOT feature, the value of this column can differ between the master and the segments. A longer-term fix will resolve the underlying issue that causes the indcheckxmin value to differ between the master and the segments.
-