- 30 January 2019, 16 commits
-
-
Committed by Richard Guo

For a LASJ (left anti-semijoin), the result is supposed to be empty if there is a NULL on the inner side. To check for NULL-ness, the join clauses are split into outer and inner argument values so that those subexpressions can be evaluated separately. This patch adds verification, when doing the extraction, that the join clauses have the form 'foo = ANY bar' and that the equality operator is strict. It fixes issue #6389, where the equality operator is implemented by a function; in that case the argument list has length one, so the attempt to extract the second argument dereferenced an invalid pointer and caused a segfault.

Reviewed-by: Ekta Khanna <ekhanna@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
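The extraction guard described above can be sketched as follows. This is a minimal illustration, not the actual GPDB fix: a hypothetical `clause` dict stands in for the planner's OpExpr node, whereas the real code inspects C Node trees.

```python
def extract_equality_args(clause):
    """Return the (outer, inner) argument pair of a join clause, or None
    if the clause doesn't have the expected strict 'foo = ANY bar' shape.

    `clause` is a hypothetical dict standing in for an OpExpr node.
    """
    if clause.get("op") != "=" or not clause.get("strict", False):
        return None                 # only a strict equality qualifies
    args = clause.get("args", [])
    if len(args) != 2:              # issue #6389: an operator implemented by
        return None                 # a function may carry a single argument
    return args[0], args[1]         # safe: both arguments exist
```

Checking the argument count before indexing is the essence of the fix; the crash came from unconditionally reading the second argument.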
-
Committed by Karen Huddleston

We are no longer consuming Python from Ivy. We now build it ourselves against the version of OpenSSL provided by the OS.

Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Sharp

This commit updates the extensions tarball to exclude krb5, and drops other dependencies from ivy.xml as well; these dependencies are now included in our build images. Removed libs:
- krb5
- openssl
- curl
- python
- openldap

These all have to be removed together, because we cannot easily link against multiple versions of the same library, and the SOVERSION of the OpenSSL installed on CentOS 7 differs from the one fetched via Ivy. As of this commit we no longer package libldap .so files with GPDB, on CentOS only. Soon we will make the same change for SLES and Ubuntu, and the conditional for Linux_LOADERS_LIBS will no longer be necessary.

Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by Bradford D. Boyle

This drops unused libraries and programs from the extensions tarball. Unused dependencies that were dropped:
- clapack
- gimli
- json-c
- net-snmp
- pcre

Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by Jacob Champion

These constants, all related to file replication in some way, no longer have any uses in the codebase.

Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Jacob Champion

Replace both concepts with MODE_NOT_SYNC -- WALrep doesn't make any further distinction.
-
Committed by Jacob Champion

Most of these codes were set only by GpSegStart.__convertSegments, which has been unused since ce4d96b6; remove that function as well. The final client of SEGSTART_ERROR_MIRRORING_FAILURE, gpstart, has been simplified: the concept of a "mirroring failure" has not been supported since the removal of filerep.
-
Committed by Jacob Champion

- happy path
- mirrors are marked down
- mirrors are dead but marked up

Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Shoaib Lari

We no longer have a filerep-based API for retrieving a mirror's version. Instead, have the postmaster append a known string (POSTMASTER_MIRROR_VERSION_DETAIL_MSG) to the detail message when a client attempts to connect to a mirror. gpstate then looks for this string to determine the mirror's version. (This is similar to the current practice of returning replication state in the detail message.)

Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
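A minimal sketch of the gpstate side of this scheme, assuming a hypothetical marker value (the real POSTMASTER_MIRROR_VERSION_DETAIL_MSG constant lives in the postmaster sources, and this is not gpstate's actual code):

```python
import re

# Assumed marker text, for illustration only; the real string is whatever
# the postmaster appends to the connection-failure DETAIL message.
MIRROR_VERSION_MARKER = "mirror version:"

def mirror_version_from_detail(detail: str):
    """Scan a connection error's DETAIL text for the mirror's version."""
    m = re.search(re.escape(MIRROR_VERSION_MARKER) + r"\s*(\S+)", detail)
    return m.group(1) if m else None
```

The key property is that a failed connection attempt still carries enough structured text for the client to extract the version.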
-
Committed by Jacob Champion

These steps relied on has_process_eventually_stopped() to tell them when a process with a given PID had finally exited. Unfortunately, that function doesn't take a PID -- it takes a process name, which it passes to pgrep. Replace the implementation here with one that supports PIDs. (Also bring the default timeout down from 2 minutes; there's no reason a process kill should take that long.)
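A PID-based wait can be built on the classic `kill(pid, 0)` existence probe. This is a sketch of the approach, not the behave step's actual code:

```python
import errno
import os
import time

def wait_for_pid_exit(pid: int, timeout: float = 10.0,
                      interval: float = 0.1) -> bool:
    """Poll until the process with `pid` is gone; True if it exited in time.

    Caveat: a zombie child still counts as existing, so reap your own
    children (waitpid) before relying on this.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # signal 0 probes existence only
        except OSError as e:
            if e.errno == errno.ESRCH:   # no such process: it has exited
                return True
            raise                        # e.g. EPERM: exists but not ours
        time.sleep(interval)
    return False
```

Unlike a name-based pgrep check, this cannot be confused by another process that happens to share the target's name.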
-
Committed by Jacob Champion

Acceptance tests for gpstate were lacking, making it difficult to prove that the previous commit didn't break anything. Start remedying that here -- these tests are not exhaustive by any means, but they're a good start. The new Concourse job is called "gpstate". The new (or modified) behave steps are as follows:

- "user kills all {type} processes [with {signal}]"
  This step previously worked only with primary processes, and would only send SIGKILL. It now allows either primary or mirror to be specified, along with the specific signal to be sent. If no signal is specified, SIGTERM is sent.
- "a standard local demo cluster is created"
  This step creates a standard three-content demo cluster with mirrors and a master standby.
- "a standard local demo cluster is running"
  This step checks that a standard demo cluster, as defined above, is currently running; if it is not, it creates one. This speeds up the tests that use it.

Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Jacob Champion

Remove references to change tracking and resynchronization; WALrep doesn't have those concepts. Replace the majority of the `gpstate -s` implementation with queries on pg_stat_replication, which are encapsulated in the new _add_replication_info() helper.

Co-authored-by: Shoaib Lari <slari@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Jacob Champion

pg_isready gives us basic information: is the segment up or down, and if it's up, is it a primary or a mirror? It can also tell that a segment is starting or stopping, but not *which* of those two states it's in -- only that the segment is alive but rejecting connections.

Co-authored-by: Shoaib Lari <slari@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
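Per the PostgreSQL pg_isready documentation, the tool communicates this through its exit status; the ambiguity described above is exit status 1, which covers both startup and shutdown. A small reference mapping (the helper name is illustrative, not gpstate's API):

```python
# pg_isready exit statuses, per the PostgreSQL documentation.
PG_ISREADY_STATUS = {
    0: "server is accepting connections",
    1: "server is alive but rejecting connections (starting up or shutting down)",
    2: "server did not respond",
    3: "no attempt was made (e.g. invalid parameters)",
}

def describe_pg_isready(exit_code: int) -> str:
    """Map a pg_isready exit code to a human-readable status."""
    return PG_ISREADY_STATUS.get(exit_code, "unknown status")
```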
-
Committed by Jimmy Yih

The XLogAODropSegmentFile function was used to implement invalid-page logging for append-only (AO) tables. In the past year, significant changes following the removal of the MMXLOG_REMOVE_FILE XLOG record type have made this function unnecessary (it is currently used only by a unit test). The most relevant change is that deletion of AO segment files now happens only during DROP TABLE (vacuum was changed to truncate the segment file), which makes it easy for AO to reuse the existing invalid-page logging implementation.

References:
https://github.com/greenplum-db/gpdb/commit/175c25e8fb0494933087ff19ef29d7377e021702
https://github.com/greenplum-db/gpdb/commit/8165e1b1627b9b2f8b40c3c55eab9de3d137b70a
https://github.com/greenplum-db/gpdb/commit/8838ac983c66af74d0a1422337cf92f08fbe5f2c
-
Committed by Jimmy Yih

The -i flag is deprecated and only available on Linux; on macOS and other BSD systems, xargs does not have it. Since we give no argument to -i, it effectively behaves as -I{}, so change the invocation to the explicit, portable form.
-
Committed by Ashwin Agrawal

A few recent failures on Concourse reveal that if the workers are very slow, promotion can take a while and the test flakes unnecessarily on timeout. In the failing instance, promotion took more than a minute to complete.
-
- 29 January 2019, 16 commits
-
-
Committed by Heikki Linnakangas

Before this, you would get warnings like the following in the log at crash recovery, for every temporary file that was deleted:

2019-01-28 20:20:45.702848 EET,,,p7513,th-1570674304,,,,0,,,seg1,,,,,"WARNING","01000","could not open directory ""base/pgsql_tmp/pgsql_tmpslice1_tuplestore5876.0"": No such file or directory",,,,,,,,"pgfnames","pgfnames.c",43,

To fix, backport the changes from PostgreSQL v11 that added support for removing temporary directories upstream: commit dc6c4c9d and the follow-up commits 561885db and eeb3c2df.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Paul Guo

For a LockRows node, if its outer plan is dummy then there will be no rows to lock, and thus the LockRows node can be marked dummy as well.

Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Pengzhou Tang

Previously, we took a snapshot of the newest segment configuration at the start of a global transaction and never changed it until the end of the transaction, even if a segment went down in the middle; this kept things simple. The problem is that some backends, like FTS and GDD, are never part of a distributed transaction, so they miss the chance to update the segment snapshot. FTS is not problematic right now, because it explicitly destroys the snapshot and gets a new one on every iteration so that it always resolves the newest hostnames, but GDD and other such backends would still be affected. On reflection, there is no harm in updating the segment snapshot for a local transaction too, except that we need to handle a local transaction started without a database selected. Another idea was to have GDD do an explicit update on every loop, but that would be easy to forget whenever a similar backend is added.
-
Committed by dyozie
-
Committed by Mel Kiyama

* docs - remove gptransfer from docs
  -- removed gptransfer topics, references to gptransfer, and images
  -- also updated text in gpcopy-migrate as a rough update for 6.0
* docs - remove gptransfer from docs - review updates
-
Committed by Chuck Litzell

* docs - REPEATABLE READ transaction mode is supported; SERIALIZABLE falls back to REPEATABLE READ
* Note that GPDB doesn't implement PostgreSQL SSI transactions
* Review comments
-
Committed by Mel Kiyama

* docs - updates for online expand
* docs - online expand - edits based on review comments; updated catalog table information; removed draft comments
-
Committed by Bradford D. Boyle

These were previously added to the task, but missed in the pipeline.

Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by Bradford D. Boyle

Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by David Sharp

And configure GPDB with --with-quicklz on RHEL. This commit removes quicklz_compressor from all platforms except RHEL/CentOS; the other platforms will be re-enabled in the future.

Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by David Sharp

Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by Jacob Champion

A character transposition in the getopt_long() option string meant that the argument handling intended for -S was being applied to -R:

pg_rewind: option requires an argument -- R

Fix that.
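The failure mode is easy to reproduce with any getopt-style parser; here is the bug in miniature using Python's getopt module (the option letters mirror pg_rewind's, but the strings are illustrative, not the actual pg_rewind source):

```python
import getopt

def parse(argv, shortopts):
    """Parse argv; return the options dict, or the parser's error message."""
    try:
        opts, _ = getopt.getopt(argv, shortopts)
        return dict(opts)
    except getopt.GetoptError as err:
        return str(err)

# Intended spec: -S takes an argument, -R is a plain flag ("RS:").
# A transposed colon ("R:S") makes -R demand the argument instead.
buggy = parse(["-R"], "R:S")    # getopt reports that -R requires an argument
fixed = parse(["-R"], "RS:")    # -R parses as a plain flag
```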
-
Committed by David Yozie

* Add notes to qualify the lack of large-object support
* Replace the large-object non-support note with a more general description and a link to the PostgreSQL docs
-
Committed by David Yozie

* Update pg_class relkind entries
* Remove the duplicate entry for composite type
* Add info for missing columns: reloftype, relallvisible, relpersistence, relhastriggers
-
Committed by Heikki Linnakangas

The point of this FIXME was that the code before the 9.2 merge was possibly broken, because it was missing the code to get the input slot. I think it had been missing since the 9.0 merge, due to a bungled merge of commit 7fc0f062, but the code in GPDB master is now identical to upstream, and there's nothing to do. Also, comparing the 8.2 and 5X_STABLE code, it looks correct in 5X_STABLE as well, so there's nothing to do there either.
-
Committed by Heikki Linnakangas

This is a backport of upstream commit 9556aa01 and Tom Lane's follow-up commit 6119060d. Cherry-picked now to avoid the 256 MB limit on strings; we used to have an old workaround for that issue in GPDB, but lost it as part of the 9.1 merge.

Upstream commit:

commit 9556aa01
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Fri Jan 25 16:25:05 2019 +0200

Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.

The old implementation first converted the input strings to arrays of wchars, and performed the search on those. However, the conversion is expensive, and for a large input string consumes a lot of memory. Allocating the large arrays also meant that these functions could not be used on strings larger than 1 GB / pg_encoding_max_length() (256 MB for UTF-8).

Avoid the conversion, and instead use the single-byte algorithm even with multibyte encodings. That can get fooled if there is a matching byte sequence in the middle of a multi-byte character, so to eliminate false positives like that, we verify any matches by walking the string character by character with pg_mblen(). Also, if the caller needs the position of the match as a character offset, we need to walk the string to count the characters.

Performance testing shows that walking the whole string with pg_mblen() is somewhat slower than converting the whole string to wchars. It's still often a win, though, because we don't need to do it if there is no match, and even when there is, we only need to walk up to the point where the match is, not the whole string. Even in the worst case there would be room for optimization: much of the CPU time in the current loop with pg_mblen() is function call overhead, and could be improved by inlining pg_mblen() and/or the encoding-specific mblen() functions. But I didn't attempt to do that as part of this patch.

Most of the callers of the text_position_setup/next functions were actually not interested in the position of the match, counted in characters. To cater for them, refactor the text_position_next() interface into two parts: searching for the next match (text_position_next()), and returning the current match's position as a pointer (text_position_get_match_ptr()) or as a character offset (text_position_get_match_pos()). Getting the pointer to the match is a more convenient API for many callers, and with UTF-8 it allows skipping the character-walking step altogether, because UTF-8 can't have false matches even when treated like raw byte strings.

Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/3173d989-bc1c-fc8a-3b69-f24246f73876%40iki.fi
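The UTF-8 fast path described in the upstream message can be illustrated in a few lines of Python. The function names here are illustrative, not the upstream API, and for non-UTF-8 multibyte encodings the real code must additionally verify matches with pg_mblen():

```python
def utf8_char_offset(haystack: bytes, byte_off: int) -> int:
    """Convert a byte offset to a character offset by skipping UTF-8
    continuation bytes (0b10xxxxxx) -- the analogue of walking with
    pg_mblen(), done only up to the match, not over the whole string."""
    return sum(1 for b in haystack[:byte_off] if (b & 0xC0) != 0x80)

def char_position(haystack: str, needle: str) -> int:
    """1-based character position of needle, position()-style; 0 if absent."""
    hb, nb = haystack.encode("utf-8"), needle.encode("utf-8")
    off = hb.find(nb)       # cheap single-byte search, no wchar conversion
    if off < 0:
        return 0
    # In UTF-8 a byte-level match cannot begin mid-character, so no
    # false-positive re-check is needed; we only walk to count characters.
    return utf8_char_offset(hb, off) + 1
```

Note how the character walk is skipped entirely when there is no match, which is where the backported commit gets much of its win.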
-
- 28 January 2019, 3 commits
- 27 January 2019, 2 commits
-
-
Committed by Daniel Gustafsson

The planner wasn't correctly anchoring the path tree for queries that included multiple recursive CTE self-referential terms. Fix by anchoring to the appropriate parent root when invoking the subquery planner. Adds a test case illustrating the query; previously the test query would error with:

ERROR: could not find CTE "x" (allpaths.c:<lineno>)

Co-authored-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson

Commit cd733c64 removed the TINC tests from the CI pipeline, but these files were seemingly left behind as dead code. Remove them.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 26 January 2019, 3 commits
-
-
Committed by Heikki Linnakangas

After commit 56bb376c, \di no longer prints the Storage column. I failed to change the 'bfv_partition' test's expected output accordingly.
-
Committed by Heikki Linnakangas

The 'translate_columns' array must be at least as large as the number of columns in the result set being passed to printQuery(). We had added one column, "Storage", in GPDB, so we must enlarge the array too. This is a bit fragile and would go wrong if there were any translated columns after the GPDB-added column, but there aren't, and we don't really do translation in GPDB anyway, so this seems good enough. The Storage column isn't actually interesting for indexes, so omit it for \di. Add a bunch of tests: the \di+ that was hitting the assertion, as well as \d commands, to exercise the Storage column.

Fixes github issue https://github.com/greenplum-db/gpdb/issues/6792

Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Ashwin Agrawal

The icw_gporca_centos6 job generates icw_gporca_centos6_dump. gpexpand takes icw_gporca_centos6_dump as input, so make it depend on just that job instead of all the ICW jobs. This makes the gpexpand job the same as the pg_upgrade job and, importantly, records the real dependency instead of a perceived one.
-