- 07 Nov 2015, 1 commit
-
-
Committed by Heikki Linnakangas
Netsnmp exported strlcpy, so when the configure script checked for the presence of strlcpy, it said "yes", even though the function was only present when linking with -lnetsnmp. That's exactly the same problem as mentioned in configure.in for -ledit; fix it in the same way.
-
- 06 Nov 2015, 13 commits
-
-
Committed by Dave Cramer
-
Committed by Heikki Linnakangas
We don't particularly care about these translations, as they haven't been kept up to date in Greenplum. The main reason for doing this is so that they don't show up when doing "grep -R pgsql-bugs@postgresql.org ." to find all the references that we really do care about. Patch by Andreas Scherbaum.
-
Committed by Heikki Linnakangas
This reduces the diff footprint with upstream, which will make merging easier. I don't see any reason for adding the unique2 column to ORDER BYs. I think this must've been some kind of misunderstanding of the test case years ago, before gpdiff.pl was invented to mask out differences in row ordering. It was a misunderstanding even back then, though, because the row order of these queries is well-defined even without the unique2 column.
-
Committed by Heikki Linnakangas
This will make merging and diffing with upstream easier.
-
Committed by Heikki Linnakangas
There seems to have been a philosophy for bugbuster tests not only to clean up after the test, but also to try to drop any objects the test creates before the test, so that the test case works if it is interrupted and you re-run it. We don't generally require that of test cases, as pg_regress always creates a fresh 'regression' database to run the tests in. In any case, dropping and recreating PL/pgSQL certainly doesn't seem appropriate, because it's installed by default.
-
Committed by Heikki Linnakangas
Move the GPDB-added parts of the 'case' regression test, and the bugbuster 'case' regression test, into a new test called 'case_gp'. This way the 'case' test is almost identical to the upstream version, which will ease diffing and merging. And there's not much point in keeping the bugbuster test separate.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It wasn't actually testing gpcrondump, as the test name implied. It created a couple of partitioned tables, inserted rows in them, and created indexes on them. That's all. That functionality is already covered by the 'partition' test case.
-
Committed by Heikki Linnakangas
We have "LIMIT 0" clauses in other tests.
-
Committed by Heikki Linnakangas
It wasn't testing anything interesting. The comments described possibly useful test scenarios, involving concurrent transactions, but the actual queries all happened in a single backend. I think we're better off just removing these.
-
Committed by Heikki Linnakangas
All it did was some setup for the 'optquery' test, so might as well make 'optquery' self-contained, by moving the initialization there.
-
Committed by Heikki Linnakangas
They were just replaced with the constants anyway, so we might as well use the constants directly in the tests.
-
Committed by Heikki Linnakangas
There were no CRC or CRC32C calls anywhere in the source, so no need to link them in.
-
- 05 Nov 2015, 6 commits
-
-
Committed by Heikki Linnakangas
Also, rephrase the message you get from enable_xform() etc. functions, if the server has been built without ORCA. Add alternative expected output files to make the regression tests pass without --enable-orca.
-
Committed by Heikki Linnakangas
The readable_external_table_timeout global variable is needed even when building without libcurl, because the GUC still exists and is listed in guc.c, even though it doesn't do anything without libcurl support. Move it outside the #ifdef USE_CURL block. While we're at it, silence a few "<function> defined but not used" warnings from sendalert.c: some static functions were only called when compiling with libcurl, so put them inside #ifdef LIBCURL blocks to silence the warnings. Reported by Digoal.zhou.
-
Committed by Haisheng Yuan
-
Committed by Heikki Linnakangas
-
Committed by Haisheng Yuan
When a user runs \d or \d+ on a partitioned table, it displays the partition keys at the current level, in a 'Partition by: (partition_key_1, ..., partition_key_n)' clause after the 'Distributed by' clause. If the user runs \d or \d+ on a non-partitioned table or a leaf-level partition, the 'Partition by' clause is not displayed. Closes #25
-
Committed by Ashwin Agrawal
A relation can potentially be dropped while we wait to acquire the lock. If the relation cannot be found in the "if_exists" case, we should just emit a NOTICE and act as a no-op.
-
- 04 Nov 2015, 5 commits
-
-
Committed by Entong Shen
Exchanging a default partition is currently not allowed, because there is no validation of the data being exchanged. This commit adds a GUC to enable it, if the user chooses to do so.
-
Committed by Chumki Roy
Added so that the gptransfer work directory can be found easily.
-
Committed by Jimmy Yih
Fix gpssh issues by upgrading pexpect/pxssh to the latest stable version, 3.3. Cherry-picked the pxssh v4.0 ssh options into this upgrade too. Made the ssh_utils.py pxssh initial connection validations threaded, to reduce the overhead of the new pxssh validation function. The issue is pxssh- and ssh-related: http://pexpect.readthedocs.org/en/latest/commonissues.html#timing-issue-with-send-and-sendline
-
Committed by Entong Shen
-
Committed by Jimmy Yih
Refactor gpcrondump and gpdbrestore utilities (along with supporting dump.py, restore.py, and backup_utils.py from gppylib) to remove DUMP_DIR and DUMP_PREFIX global variables.
-
- 03 Nov 2015, 11 commits
-
-
Committed by Ashwin Agrawal
Drops on segments were always performed using IF EXISTS, which may cause inconsistency in the cluster if, for some reason, a table is found on the master but not on the segments. The patch now dispatches DROP with 'if_exists' only when it is specified by the user. Otherwise, segments will abort the transaction if the table being dropped is not found, instead of dropping the table only on the master.
-
Committed by Abhijit Subramanya
The issue manifested with a large file in the data directory: gpinitstandby encountered a SIGSEGV because tar failed during basebackup. With the fix, it gracefully errors out. The patch pulls in the following commit from upstream, in order to improve error handling and recovery in walsenders: commit fd5942c1, Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>, Date: Fri Oct 5 17:13:07 2012 +0300, "Use the regular main processing loop also in walsenders." The regular backend's main loop handles signal handling and error recovery better than the current WAL sender command loop does. For example, if the client hangs and a SIGTERM is received before starting streaming, the walsender will now terminate immediately, rather than hang until the connection times out.
-
Committed by Asim Praveen
If DROP has to wait for a lock on the relation after its OID was looked up, it may happen that the name is no longer valid by the time DROP acquires the lock. This case is handled by pulling in a function from upstream: commit 4240e429, Author: Robert Haas <rhaas@postgresql.org>, Date: Fri Jul 8 22:19:30 2011 -0400, "Try to acquire relation locks in RangeVarGetRelid."
-
Committed by Ashwin Agrawal
The patch adds the size required to finalize the RLE repeats array to the block space calculation for CO tables, so that we correctly detect whether the current block is full and a new block should be started for the current item. If the item is non-repeated (NULL or a value), we need to finalize the RLE repeat counts array before storing the new value. The space calculation only checked the space required for the new item, either in the NULLs array or the actual item space, but missed that storing a non-repeating item also requires finalizing the previous repeating array, which needs 4 more bytes of space. Hence it said it was okay to store the item, but the insert later failed when pushing the block to disk, as the block was over the blocksize specified for the table.
-
Committed by Heikki Linnakangas
It wasn't working correctly anyway on 32-bit x86 systems, where sizeof(int) < sizeof(Datum), because in Greenplum sizeof(Datum) == 8 even on 32-bit platforms. This was revealed by failures in the 'misc' regression test, which tested the V0 support.
-
Committed by Heikki Linnakangas
The 'gp_optimizer' test also creates a table called 'sales', which started to clash with the 'sales' table created by the new 'decode_expr' test. Rename the test table used in decode_expr, and also drop it at the end of the test.
-
Committed by Entong Shen
-
Committed by Heikki Linnakangas
Not sure why ORCA causes the error, but it seems to be intended behaviour, so I'm not going to try fixing that right now.
-
Committed by Heikki Linnakangas
The gphdfs test was concerned with testing gphdfs. It's useless without gphdfs itself, and was causing the main regression suite to fail, if gphdfs was not installed. Likewise, move the gphdfs-related parts of exttab1 test.
-
Committed by Heikki Linnakangas
Looks like all the 'functional' test was testing was DECODE() expressions, so rename the test and move it to the main test suite. While we're at it, clean up the test case to some extent: add the missing semicolons to some queries, which had caused the subsequent DROP to fail; remove the unused functions and the emp and dept tables; add some comments explaining what each test does; and remove the use of multi-byte characters from the test case, so that it works regardless of encoding.
-
Committed by Heikki Linnakangas
Remove unnecessary ORDER BY clauses that are not present in the upstream version. Disable COPY BINARY test, as GPDB doesn't support BINARY copy. Finally, adjust the error messages in the expected output to match what you get nowadays. DROP TABLE cleanup commands have been added to some other tests, so that the extra user tables don't appear in the output of the 'misc' test. It's not very nice of the 'misc' test to list all user tables, but this nevertheless seems less likely to cause merge conflicts than ripping out that part of the test altogether. There are still differences in the list of user tables though.
-
- 02 Nov 2015, 4 commits
-
-
Committed by Heikki Linnakangas
Revert some unnecessary changes compared to upstream: there's no need for ORDER BYs in the queries, as gpdiff.pl will mask out row-order differences. That needed some fixes to atmsort.pl, though, to make it smarter about reordering COPY TO STDOUT results. It used to deal only with the COPY (SELECT ...) TO STDOUT variant, and with particularly named tables (COPY .*test1 TO STDOUT); make it handle all COPY TO STDOUT commands. Also make it smarter about detecting the end of COPY TO STDOUT output: in addition to -- or ERROR, also treat any SELECT, UPDATE, CREATE etc. command as the end of the result. In passing, remove the "-- copy_stdout" command from atmsort.pl, as it was unused. There's no need to label the functions as immutable or with NO SQL or MODIFIES SQL DATA, and update the error message in the expected output about cross-segment access within functions. One of the tests gives a different error than on upstream: an empty line gives an "invalid integer" error in upstream, but "missing data" on GPDB. I'm not quite sure what's causing that, but both errors seem reasonable for that case (the "missing data" one is perhaps even better), so let's just memorize that difference.
-
Committed by Heikki Linnakangas
The upstream 'copy2' regression test exercised this, but we haven't noticed because we've disabled that test. I spotted this while trying to re-enable it. Nevertheless, let's add an explicit test for this.
-
Committed by Heikki Linnakangas
Integers and numerics don't need quoting in CSV-format COPY, but they should still be quoted if the FORCE QUOTE option is used. We had memorized the incorrect output as expected output of the GPDB-added parts at the end of the 'copy' regression test. Fix that, but also add an explicit test case for this.
-
Committed by Heikki Linnakangas
The USE_FORCE_PLAN option has been unused for a long time.
-