- 13 February 2017 (6 commits)
-
-
Committed by Daniel Gustafsson
cdb_dumpall_agent.c is neither compiled, nor does it contain anything of interest. Kill it for now; if we want cdb_dumpall_agent it can be resurrected or started from scratch.
-
Committed by Heikki Linnakangas
This hasn't been tested for a while. And if someone wants to build GPDB on Solaris, they should use autoconf tests and the upstream "#ifdef _sparc" method to guard platform-dependent code, rather than the GPDB-specific "pg_on_solaris" flag.
-
Committed by Heikki Linnakangas
If you did "DROP USER IF EXISTS not_there", you got a notice from every segment:

    postgres=# DROP USER IF EXISTS testuser;
    NOTICE: role "testuser" does not exist, skipping
    NOTICE: role "testuser" does not exist, skipping (seg0 127.0.0.1:40000 pid=1554)
    NOTICE: role "testuser" does not exist, skipping (seg1 127.0.0.1:40001 pid=1556)
    NOTICE: role "testuser" does not exist, skipping (seg2 127.0.0.1:40002 pid=1555)

That was quite noisy. Suppress the notices from the segments, so that you only get one NOTICE, from the master. We had already done this for all other object types that support IF EXISTS, like tables, functions, etc.
-
Committed by Pengzhou Tang
Add the GUC_GPDB_ADDOPT flag for IntervalStyle so it can be dispatched to QEs.
-
Committed by Adam Lee
-
Committed by Daniel Gustafsson
The tests in this suite simply wrap queries from other ICW suites (mainly qp_derived_table, qp_olap_window, qp_olap_mdqa) in plain CREATE VIEW statements with a SELECT * on the view. Remove it, as we already have the queries in testing and ample coverage of CREATE VIEW in the create_view suite.
-
- 12 February 2017 (3 commits)
-
-
Committed by Heikki Linnakangas
This code was put in place back in 2007, to work around inconsistent formatting of the DETAIL lines in errors coming from external tables. From MPP-1557:

> The exttab1 test fails occasionally due to strangeness in the DETAIL
> formatting. Here's what happens when I select from my badly-formatted
> external table. Note that for the same select statement on the same table,
> the DETAIL record formatting is different:
>
> jc1=# select count from bad_whois ;
> ERROR: missing data for column "domain_name" (seg0 slice1 localhost:11002 pid=11535)
> DETAIL: External table bad_whois, line 2 of gpfdist://localhost:8080/whois.csv: ""
>
> jc1=# select count from bad_whois ;
> ERROR: missing data for column "subdomain" (seg0 slice1 localhost:11002 pid=11535)
> DETAIL:
> External table bad_whois, line 3 of gpfdist://localhost:8080/whois.csv: "is provided by WEBCC for information purposes, and to assist in obtaining
> information about or rela..."
>
> The problem is that normally the DETAIL information comes immediately on
> the same line as the "DETAIL:" label, but if the DETAIL information
> contains a newline, then a newline is inserted after the "DETAIL:" label.
> It's a little tricky to handle this case cleanly for gpdiff comparison.
> If the discrepancy is due to some weirdness in external table message
> generation, it would be nice to fix, but if it is due to some strange
> postgresql message formatting I can probably live with it.

That doesn't seem to be a problem anymore, so remove the hack.
-
Committed by Heikki Linnakangas
We haven't used CVS, with its Revision tags, for a very long time.
-
Committed by Heikki Linnakangas
Commit fecad30b forgot to update this alternative expected output. (For when you build without ORCA)
-
- 11 February 2017 (15 commits)
-
-
Committed by Heikki Linnakangas
In passing, I fixed the typo in the error message, but missed it in the expected output of these tests.
-
Committed by C.J. Jameson
* Rename 5.0.json to 5.json
* The code expects the file to be named based on the major version number, which now only has a single digit

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
In addition, delete unnecessary drop statements.
-
Committed by C.J. Jameson
We added make commands to clean up files that are gitignored, because now git clean won't be able to delete them.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Jimmy Yih
This commit adds TINC aoco_compression to the GPDB 5.0_MASTER pipeline as a nightly trigger job.
-
Committed by Jimmy Yih
The storage type for cidr and inet was changed from plain to main around the beginning of open source Greenplum but still plagues some tests. These should be the last ones inside the TINC directory. Other than that, just some random ans file updates. This commit is just to make the tests green on CI. The tests will be refactored in a later commit to run more quickly and to reduce clutter.
-
Committed by Ashwin Agrawal
sync_tools is currently tar'd up in the compile job, uploaded to S3 and downloaded by a whole bunch of jobs. This doesn't seem to be required, so let's stop doing it and cut down the wasted time and resources.
-
Committed by Ashwin Agrawal
I don't see a rationale for serializing job runs. Multiple commits should be able to run the job in parallel, to help provide faster feedback.
-
Committed by Xin Zhang
In Postgres, it's not supported to have a temp table as part of PREPARE TRANSACTION. Hence, the method `LockTagIsTemp()`, which checks whether a lock is on a temp table, was removed from upstream.

It's a requirement for GPDB to allow access to temp tables in MPP transactions, and all transactions go through 2PC. If we remove `LockTagIsTemp()`, then the locks on the temp table will be captured as part of `TwoPhaseRecordOnDisk` and held during the prepare phase of 2PC. That caused a concern during `xact_redo()` if a segment crashed before commit: in that case, `xact_redo()` will lock and release the locks on the temp table, which has already been deleted by GPDB when handling the crash. That results in redundant lock and release operations on a non-existent temp table object.

To prevent that from happening, we shouldn't copy the lock information on the temp table into `TwoPhaseRecordOnDisk`. Hence, we still need the `LockTagIsTemp()` method; this commit brings it back.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
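A minimal sketch of the filtering described above, assuming PostgreSQL's LOCALLOCK/LOCKTAG structures and that LockTagIsTemp() takes a LOCKTAG pointer; collect_prepared_lock() and record_two_phase_lock() are hypothetical names used only for illustration, not actual GPDB functions.

    #include "postgres.h"
    #include "storage/lock.h"

    /*
     * Illustrative only: skip temp-table locks when gathering lock state for
     * the two-phase state file, so that xact_redo() never has to re-acquire
     * locks on a temp table that was already dropped during crash handling.
     */
    static void
    collect_prepared_lock(LOCALLOCK *locallock)     /* hypothetical helper */
    {
        LOCKTAG    *tag = &locallock->tag.lock;

        if (LockTagIsTemp(tag))
            return;                     /* don't copy into TwoPhaseRecordOnDisk */

        record_two_phase_lock(tag);     /* hypothetical: persist the lock record */
    }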
-
Committed by Shubham Sharma
-
Committed by Shubham Sharma
* Changed the help text to reflect changes made to gpdeletesystem that avoid validating the master data directory multiple times
-
Committed by Heikki Linnakangas
There was a mechanism, using PARALLEL_BUILD=1, to guess suitable -j and -l flags to pass to make. I tried enabling it, and it came up with the options "-l3.5 -j", which doesn't seem right and didn't speed up the build. Instead of trying to be smart, let's just hard-code "-j4" into the concourse launcher script. That's parallel enough to speed up the build considerably, but also not so parallel that it does much harm if the system is really underpowered.
-
Committed by Jamie McAtamney
This commit expands on 2ccdfbf2, as the gpversion helper functions also needed to be modified to account for the 4-digits-to-3-digits conversion; with 3 digits, the major version is 5, not 5.0. It also updates the gpmigrator_util tests to use 4.3 as the previous version, instead of 4.2, and removes some references to customers in the test version strings.
-
Committed by Haisheng Yuan
* Fix incorrect size of sliceMap in CdbDispatchResults.

  In case the slice table of a plan contains multiple roots, we created a sliceMap based on the slice capacity of only one of the trees. Since sliceIndices reference the entire slice table, we ended up referencing sliceMap with a sliceIndex out of bounds of the sliceCapacity. This fixes that issue by correctly calculating sliceCapacity.

  Refer to the commits that introduced this problem:
  [1] 4b360942 PR#827: combined the calculations of resultCapacity & sliceCapacity, which is incorrect, e.g. in cases that have an InitPlan
  [2] a2ecd1fa: removed resultCapacity, considering it redundant

* Revert "Fix incorrect size of sliceMap in CdbDispatchResults."

  This reverts commit 8cae828809ad22aaf05ebfd77f4be35fe3614e63.

* Update small changes to fix the issue
-
- 10 February 2017 (16 commits)
-
-
Committed by Dave Cramer
* Align the version number with the current 5.0. Currently gppkg fails due to the server being version 5.0 and getversion not understanding it.
* MAIN_VERSION uses 3 numbers now, add a test for 5.0
* Fix formatting
* Attempt to deal with versions prior to 5, which expected 4 version numbers, whereas versions after 4 expect 3: if the major version is below 5, append 99,99 to versions shorter than 4 numbers; if the major version is greater than 4, append 99 to versions shorter than 3 numbers (see the sketch below)
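A rough sketch of that padding rule, written in C here purely for illustration (the actual gppkg/getversion code is Python, and the function name pad_version is made up):

    #include <stdio.h>
    #include <string.h>

    /*
     * Pad a version string per the rule above: before GPDB 5 a full version
     * has four numbers, from 5 on it has three, and missing trailing numbers
     * are filled with 99 (e.g. "4.3" -> "4.3.99.99", "5.0" -> "5.0.99").
     */
    static void
    pad_version(const char *version, char *buf, size_t buflen)
    {
        int     major = 0;
        int     parts = 1;

        sscanf(version, "%d", &major);
        for (const char *p = version; *p; p++)
            if (*p == '.')
                parts++;

        snprintf(buf, buflen, "%s", version);
        int     want = (major < 5) ? 4 : 3;
        while (parts < want && strlen(buf) + 3 < buflen)
        {
            strcat(buf, ".99");
            parts++;
        }
    }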
-
Committed by Heikki Linnakangas
Instead of having a duplicate typedef in url.h and relscan.h, use "struct URL_FILE" rather than just "URL_FILE" in FileScanDescData. We already handle CopyStateData the same way in this struct. This silences a compiler warning that Daniel Gustafsson reported off-list when building with clang with -Wtypedef-redefinition.
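A small sketch of that pattern, with an illustrative member name (only URL_FILE, FileScanDescData, url.h and relscan.h come from the message above):

    /* url.h: the one and only typedef */
    typedef struct URL_FILE URL_FILE;

    /* relscan.h: no second typedef; referring to the struct by its tag is
     * enough for a pointer member and keeps -Wtypedef-redefinition quiet. */
    struct URL_FILE;                    /* forward declaration */

    typedef struct FileScanDescData
    {
        /* ...other fields elided... */
        struct URL_FILE *fs_file;       /* illustrative member name */
    } FileScanDescData;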
-
Committed by Daniel Gustafsson
The tests in the Security suite are for the most part already covered by the ICW role and auth_constraints suites; the remaining ones of interest were ported so that the suite can be retired from Bugbuster.
-
Committed by Daniel Gustafsson
gp_enable_inheritance hasn't existed for a long time, setting it and getting a very expected ERROR is uninteresting in a test.
-
Committed by Daniel Gustafsson
Passing in abs_srcdir for Bugbuster made pg_regress parse and create the input/output files for ICW rather than Bugbuster, cluttering up the sql/expected dirs with unused files. This went unnoticed as there were no actual input/output files in Bugbuster. Set the correct path to shave some time (and disk space) off test runs.
-
Committed by Daniel Gustafsson
pg_regress is fine with not having input/output directories at all, no need to place dummy empty files there.
-
Committed by Daniel Gustafsson
These test sources have not been in the run schedule since they were imported into the tree a long time ago, seemingly in favor of the AOCO_Compression2 suite, which appears to have originated as a copy of AOCO_Compression. Judging by the recorded output for aoco_compr_sanity, it hasn't been tested for some time, given there is a successful SET command for a GUC which was removed long ago.
-
Committed by Daniel Gustafsson
The gpmapreduce tests run as part of ICW and their test files are maintained there; remove the leftovers.
-
Committed by Heikki Linnakangas
The callers of url_fclose() didn't check the return value, and most of the implementations didn't return anything interesting anyway. Mark it as void; the implementations are expected to elog() an error instead if something goes wrong.

Merge the 'size' and 'nmemb' parameters of url_fread and url_write into just one 'size' parameter. There's no need to precisely emulate the fread/fwrite functions here: the callers always passed size == 1, and there was even an Assert(size == 1) in the url_file_fread() implementation. The implementations also differed on how the return value was calculated.
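A hedged sketch of the resulting declarations (the parameter lists are assumptions based on the message above, not copied from url.h):

    /* url_fclose() now reports problems via elog()/ereport() and returns
     * nothing; url_fread() takes a single byte count instead of emulating
     * fread()'s size/nmemb pair. */
    extern void   url_fclose(URL_FILE *file, bool failOnError, const char *relname);
    extern size_t url_fread(void *ptr, size_t size, URL_FILE *file, CopyState pstate);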
-
Committed by Heikki Linnakangas
Makes it shorter, nicer to use, and less error-prone.
-
Committed by Heikki Linnakangas
Use the public API instead.
-
Committed by Heikki Linnakangas
url.c contained four different flavors of external tables:

* "web", using libcurl
* "file", for reading files directly on the server
* "execute", for launching a command and reading its output
* "custom", where a custom protocol function does all the work

It was hard to tell which functions are required by which flavor. To clarify this, split up url.c into multiple files. What remains in url.c is just code to "dispatch" the url_* calls to the correct implementation, depending on the flavor (see the sketch below). All the actual work is done in flavor-specific files.
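A sketch of what such a dispatcher could look like; the CFTYPE_* constants and the per-flavor function names here are assumptions for illustration, not necessarily the exact identifiers in the tree:

    /* url.c after the split: dispatch on the flavor, do no real work here. */
    size_t
    url_fread(void *ptr, size_t size, URL_FILE *file, CopyState pstate)
    {
        switch (file->type)
        {
            case CFTYPE_FILE:       /* "file": read directly on the server */
                return url_file_fread(ptr, size, file, pstate);
            case CFTYPE_EXEC:       /* "execute": read a command's output */
                return url_execute_fread(ptr, size, file, pstate);
            case CFTYPE_CURL:       /* "web": libcurl-based */
                return url_curl_fread(ptr, size, file, pstate);
            case CFTYPE_CUSTOM:     /* "custom": protocol function does the work */
                return url_custom_fread(ptr, size, file, pstate);
            default:
                elog(ERROR, "unrecognized external table type: %d", file->type);
                return 0;           /* not reached */
        }
    }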
-
Committed by Heikki Linnakangas
They are two very different things. Let's use URL_FILE variables and fields everywhere, instead of liberally casting between FILE and URL_FILE.
-
Committed by Heikki Linnakangas
* Remove some unnecessary #includes
* Run pgindent
* Mark readheaderLine() static
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
To avoid silent breakage in the password generation, remember the pg_authid output in the output file. The reason the query was commented out in the first place was that different SSL libraries were being used, which made the output unstable. Since we now only support one SSL library, uncomment it and remove the old comment.
-