- 10 Jun 2020, 1 commit
-
- 08 Jun 2020, 1 commit
-
-
Committed by Hubert Zhang
When introducing a new mirror, we need two steps: 1. start the mirror segment; 2. update the gp_segment_configuration catalog. Previously gp_add_segment_mirror was called to update the catalog, but the dbid was chosen by get_availableDbId(), which is not guaranteed to match the dbid recorded in internal.auto.conf. Reported in issue #9837. Reviewed-by: Paul Guo <pguo@pivotal.io> Reviewed-by: Bhuvnesh Chaudhary <bhuvnesh2703@gmail.com> Cherry-picked from commits f7965d and 1ee999.
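A minimal sketch of the idea behind the fix (the helper name is hypothetical, not gpdb's actual code): instead of allocating a free dbid, read back the dbid already recorded in the mirror's internal.auto.conf so the catalog and the config file agree.

```python
import re

def dbid_from_internal_conf(conf_text):
    """Extract gp_dbid from the contents of internal.auto.conf.

    Returns the dbid as an int, or None when the setting is absent,
    so the caller can decide how to handle a missing value.
    """
    match = re.search(r"^\s*gp_dbid\s*=\s*(\d+)", conf_text, re.MULTILINE)
    return int(match.group(1)) if match else None
```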
-
- 04 Jun 2020, 1 commit
-
-
Committed by Wen Lin
-
- 21 May 2020, 1 commit
-
-
Committed by Wen Lin
While gpload is loading data, if the control file contains "error_table" but does not contain "preload", gpload fails with an error that it has no attribute "staging_table" or "fast_path".
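A hedged sketch of the shape of such a fix (the class and attribute handling are illustrative, not gpload's actual code): initialize the attributes unconditionally so later code never hits an AttributeError when the control file has error_table but no preload section.

```python
class LoadOptions(object):
    """Illustrative stand-in for gpload's per-run state."""

    def __init__(self, control):
        # Initialize these even when the YAML control file specifies
        # error_table without a preload section; otherwise later code
        # raises: no attribute 'staging_table' / 'fast_path'.
        preload = control.get("preload") or {}
        self.staging_table = preload.get("staging_table")
        self.fast_path = preload.get("fast_path", False)
```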
-
- 13 May 2020, 2 commits
-
-
Committed by Ning Yu
We use "pkill postgres" to clean up leaked segments in the behave tests; if the postgres processes have already exited, the pkill command fails with code 1, "No processes matched or none of them could be signalled". Fixed by ignoring the return code of pkill. (cherry picked from commit a92e0a33)
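The return-code handling can be sketched like this (function names are hypothetical): pkill's exit status 1 simply means "nothing matched", which is fine for cleanup purposes.

```python
import subprocess

def pkill_succeeded(returncode):
    """pkill exits 0 when it signalled at least one process and 1 when
    nothing matched; for cleanup both outcomes count as success."""
    return returncode in (0, 1)

def cleanup_leaked_segments(pattern="postgres"):
    """Best-effort kill of leaked processes; never fails on no-match."""
    rc = subprocess.call(["pkill", pattern])
    return pkill_succeeded(rc)
```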
-
- 12 May 2020, 1 commit
-
-
Committed by Peifeng Qiu
gpload in the latest Windows client package requires the VS redistributable package. Output a more meaningful message if pg.py fails to load.
-
- 07 May 2020, 2 commits
-
-
Committed by Bhuvnesh Chaudhary
Previously, gpinitsystem did not allow the user to specify both a hostname and an address for each segment in the input file used with -I; it accepted only one value per segment and used it for both hostname and address. This commit changes the behavior so that the user can specify both. If the user specifies only the address (such as with an old config file), the old behavior is preserved and both hostname and address are set to that value. It also adds a few tests around input file parsing so SET_VAR is more resilient to further refactors. The specific changes are:
1) Change SET_VAR to parse either the old format (address only) or the new format (hostname and address) of the segment array representation.
2) Move SET_VAR from gpinitsystem to gp_bash_functions.sh and remove the redundant copy in gpcreateseg.sh.
3) Remove a hardcoded "~0" in QD_PRIMARY_ARRAY in gpinitsystem, a replication port value left over from 5X.
4) Improve the check on the number of fields in the segment array representation.
Also, remove the use of the ignore-warning flag and use [[ ]] for the IGNORE_WARNING check.
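The dual-format parsing can be sketched in Python (the real parser is the bash SET_VAR function, and the exact field layout here is an illustrative assumption): when only five fields are present, the address doubles as the hostname.

```python
def parse_segment_line(line):
    """Parse one gpinitsystem -I segment entry.

    Assumed new format: hostname~address~port~datadir~dbid~content
    Assumed old format:          address~port~datadir~dbid~content
    With the old format the address doubles as the hostname,
    preserving the previous behavior.
    """
    fields = line.split("~")
    if len(fields) == 6:
        hostname, address = fields[0], fields[1]
        port, datadir, dbid, content = fields[2:]
    elif len(fields) == 5:
        hostname = address = fields[0]
        port, datadir, dbid, content = fields[1:]
    else:
        raise ValueError("unexpected field count: %d" % len(fields))
    return {"hostname": hostname, "address": address, "port": int(port),
            "datadir": datadir, "dbid": int(dbid), "content": int(content)}
```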
-
Committed by Bhuvnesh Chaudhary
Previously, gpinitsystem was incorrectly filling the hostname field of each segment in gp_segment_configuration with the segment's address. This commit changes it to correctly resolve hostnames and update the catalog accordingly. This reverts commit 12ef7352, Revert "gpinitsystem: update catalog with correct hostname". Commit message from 12ef7352: The commit requires some additional tweaks to the input file logic for backwards compatibility purposes, so we're reverting this until the full fix is ready.
-
- 29 Apr 2020, 1 commit
-
-
Committed by Ning Yu
gpinitsystem will fail if the gpinitsystem logs contain errors or warnings left over from previous tests.
-
- 31 Mar 2020, 1 commit
-
-
Committed by Jamie McAtamney
This reverts commit e4add7cb. The commit requires some additional tweaks to the input file logic for backwards compatibility purposes, so we're reverting this until the full fix is ready.
-
- 18 Mar 2020, 1 commit
-
-
Committed by Jamie McAtamney
Previously, gpinitsystem was incorrectly filling the hostname field of each segment in gp_segment_configuration with the segment's address. This commit changes it to correctly resolve hostnames and update the catalog accordingly. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
- 14 Mar 2020, 2 commits
-
-
Committed by Adam Berlin
gpinitsystem did not quote the username while performing ALTER USER. When the username is a numeric value, the postgres parser rejects it unless the username is quoted. See https://www.postgresql.org/docs/9.4/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for details:
- SQL identifiers and key words must begin with a letter (a-z, but also letters with diacritical marks and non-Latin letters) or an underscore (_).
- There is also a second kind of identifier: the delimited or quoted identifier, formed by enclosing an arbitrary sequence of characters in double quotes (").
This commit uses the variable interpolation provided by psql to properly quote user-provided values, and uses RETVAL to perform testing due to commit d7b7a40a. Co-authored-by: Jacob Champion <pchampion@pivotal.io> (cherry picked from commit f188ecb5)
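The quoting rule itself is simple, sketched here in Python (psql achieves the same effect with its :"var" interpolation; the function names are illustrative): wrap the identifier in double quotes and double any embedded double quotes.

```python
def quote_ident(name):
    """Quote a SQL identifier so that names starting with a digit (or
    containing special characters) survive the parser."""
    return '"%s"' % name.replace('"', '""')

def alter_user_sql(username):
    # The username is user-provided, so always quote it.
    return "ALTER USER %s PASSWORD NULL" % quote_ident(username)
```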
-
Committed by Ashuka Xue
Previously, analyzedb would error out if a table was dropped while analyzedb was running. Now, we silently skip dropped tables when determining the tables to analyze.
-
- 13 Mar 2020, 1 commit
-
-
Committed by Chris Hajas
Previously, running analyzedb with an input file (`analyzedb -f <config_file>`) containing a root partition would fail because we did not properly populate the list of leaf partitions. The logic in analyzedb assumes that we enumerate leaf partitions from the root partition the user passed in (either on the command line or in an input file). While we did this properly for tables passed on the command line, for input files we looked up the bare table name rather than the schema-qualified one. This caused partitioned heap tables to fail when writing the report/status files at the end, and caused analyzedb to not track DML changes in partitioned AO tables. Now, we properly check for the schema-qualified table name. (cherry picked from commit d1611944)
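The essence of the fix can be sketched as follows (names and data shapes are illustrative, not analyzedb's real internals): normalize every requested table to its schema-qualified form before looking up its leaf partitions.

```python
def qualify(name, default_schema="public"):
    """Return a schema-qualified table name so command-line input and
    input-file entries are compared in the same form."""
    return name if "." in name else "%s.%s" % (default_schema, name)

def leaves_for_requested(requested, leaves_by_root):
    """leaves_by_root maps a schema-qualified root partition to its
    leaf partitions; roots are looked up by qualified name only."""
    leaves = []
    for root in requested:
        leaves.extend(leaves_by_root.get(qualify(root), []))
    return leaves
```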
-
- 09 Mar 2020, 1 commit
-
-
Committed by Bradford D. Boyle
Fix a packcore test failure on SLES12. Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
-
- 02 Mar 2020, 1 commit
-
-
Committed by Huiliang.liu
Add a max_retries flag to gpload. It sets the maximum number of retries when connecting to GPDB times out. The default value is 0, which means no retry. If max_retries is -1 or another negative value, gpload retries forever. Tested manually. (cherry-picked from master commit b891b85b)
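The retry semantics described above (0 = no retry, negative = retry forever, n = up to n retries) can be sketched like this; the function and its signature are illustrative, not gpload's actual code.

```python
import time

def connect_with_retries(connect, max_retries=0, delay=1.0):
    """Call connect() until it succeeds.

    max_retries == 0 -> single attempt, no retry (the default)
    max_retries  < 0 -> retry forever
    max_retries == n -> up to n retries after the first attempt
    """
    attempt = 0
    while True:
        try:
            return connect()
        except Exception:
            if max_retries == 0 or (max_retries > 0 and attempt >= max_retries):
                raise
            attempt += 1
            time.sleep(delay)
```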
-
- 27 Feb 2020, 1 commit
-
-
Committed by Daniel Gustafsson
This fixes multiple occurrences of duplicated words in sentences, like "the the" and "is is" etc. Backported from master commit 1d44a0c5. Reviewed-by: Mel Kiyama <mkiyama@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 19 Feb 2020, 2 commits
-
-
Committed by Haozhou Wang
1. When two gppkg packages have the same dependencies, the gppkg utility refused to install the second package and threw an error. This patch fixes the issue so that the second package installs successfully. 2. Fix an install/uninstall issue when the master and standby master use the same node address. (This patch is backported from the master branch.)
-
Committed by Ashwin Agrawal
`ifa_addr` may be null for an interface returned by getifaddrs(), so a null check must be performed; otherwise ifaddrs crashes. As a side effect of this crash, gpinitstandby always failed on my Ubuntu laptop. The interface for which `getifaddrs()` returned null for me is:
gpd0: flags=4240<POINTOPOINT,NOARP,MULTICAST> mtu 1500 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
(gdb) p *list $5 = {ifa_next = 0x5555555586a8, ifa_name = 0x555555558694 "gpd0", ifa_flags = 4240, ifa_addr = 0x0, ifa_netmask = 0x0, ifa_ifu = {ifu_broadaddr = 0x0, ifu_dstaddr = 0x0}, ifa_data = 0x555555558bb8}
Reviewed-by: Jacob Champion <pchampion@pivotal.io> Reviewed-by: Mark Sliva <msliva@pivotal.io>
-
- 13 Feb 2020, 1 commit
-
-
Committed by Asim R P
Incremental recovery and rebalance operations involve running pg_rewind against failed primaries. This patch changes gprecoverseg so that pg_rewind is invoked in parallel, using the WorkerPool interface, for each affected segment in the cluster. There is no reason to rewind segments one after the other. Fixes GitHub issue #9466. Reviewed by: Mark Sliva and Paul Guo (cherry picked from commit 43ad9d05)
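The parallel dispatch can be sketched with a standard thread pool standing in for gpdb's WorkerPool (names and pool choice are assumptions, not the actual gprecoverseg code):

```python
from multiprocessing.dummy import Pool  # thread pool, standing in for WorkerPool

def rewind_segments(segments, run_pg_rewind, max_workers=16):
    """Invoke pg_rewind for each failed segment concurrently rather
    than one after the other; run_pg_rewind takes one segment and the
    results come back in the same order as the input."""
    workers = min(max_workers, max(1, len(segments)))
    pool = Pool(workers)
    try:
        return pool.map(run_pg_rewind, segments)
    finally:
        pool.close()
        pool.join()
```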
-
- 12 Feb 2020, 2 commits
-
-
Committed by Jamie McAtamney
Previously, gpstart could not start the cluster if a standby master host was configured but currently down. In order to check whether the standby was supposed to be the acting master (and prevent the master from being started if that was the case), gpstart needed to access the standby host to retrieve its TimeLineID, and if the standby host was down the master would not start. This commit modifies gpstart to assume that the master host is the acting master if the standby is unreachable, so that it never gets into a state where neither the master nor the standby can be started. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> (cherry picked from commit 29c759ab8c1f4179e46b51c91a808e76f6747075)
-
Committed by Kalen Krempely
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
- 08 Feb 2020, 1 commit
-
-
Committed by Ashwin Agrawal
gpcheckcat hard-coded the master dbid to 1 in various queries. This assumption is flawed: there is no restriction that the master has dbid 1; it can be any value. For example, after a failover to the standby, gpcheckcat is not usable with that assumption. Hence, find the master's dbid at run time using the fact that its content ID is always -1. Co-authored-by: Alexandra Wang <lewang@pivotal.io> (cherry picked from commit d1f19ca9)
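A sketch of the run-time lookup (the query-runner interface is an assumption; the catalog columns are real): the acting master is the content -1 segment with role 'p', whatever its dbid happens to be.

```python
MASTER_DBID_SQL = ("SELECT dbid FROM gp_segment_configuration "
                   "WHERE content = -1 AND role = 'p'")

def get_master_dbid(run_query):
    """run_query executes SQL and returns a list of row tuples; resolve
    the master's dbid at run time instead of hard-coding 1."""
    rows = run_query(MASTER_DBID_SQL)
    if len(rows) != 1:
        raise RuntimeError("expected one acting master, found %d" % len(rows))
    return rows[0][0]
```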
-
- 24 Jan 2020, 3 commits
-
-
Committed by Mark Sliva
We update the pg_hba.conf file with replication entries for each hostname/address to enable cross-subnet cluster expansion. There are no tests for this change, but they can be added at a later time. (cherry picked from commit cdd1e934) Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Mark Sliva
The four CM utilities gpinitsystem, gpinitstandby, gpaddmirrors, and gpmovemirrors now add the relevant pg_hba.conf entries to allow WAL replication to mirrors from their respective primaries across subnets. There are two parts to this commit:
1) Modify the CM utilities to add the pg_hba.conf entries that allow WAL replication to mirrors across a subnet.
2) Test the relevant CM utilities across subnets.
The previous pg_hba.conf replication entry, 'host replication $USER samenet trust', does not allow WAL replication connections across subnets. We keep this entry in order to support single-host development, and then add one replication line for each primary and mirror interface address to new primaries and mirrors. It looks like 'host replication $USER $IP_ADDRESS trust', or when HBA_HOSTNAMES=1, 'host replication $USER $HOSTNAME trust'. Further, if there is ever a failover and subsequent promotion, replication connections can be made to the newly promoted primary from the host on which the previous primary failed, because those addresses are copied over to the new mirror during a pg_basebackup. We also add similar logic to support cross-subnet replication between the master and standby. This behavior is tested in the cross_subnet behave tests, which assert that the replication connection is valid by manually making the connection, in addition to relying on segments being synchronized, to ensure that the pg_hba.conf file is actually used. (cherry picked from commit 79637980) Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Mark Sliva
The interface addresses used for replication are scanned using a new internal utility, ifaddrs, which returns all of the interface addresses separated by newlines. As an internal utility, it is installed into $GPHOME/libexec. There is no Python 2 library that provides this functionality, so we add it ourselves. Also add a configure dependency on getifaddrs and inet_ntop, which are now required to build a functioning GPDB system. As far as we can tell, the other headers and functions are already handled through other configure checks. (cherry picked from commit 15a30510) Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 03 Jan 2020, 3 commits
-
-
Committed by Huiliang.liu
gpload will run in GPDB6 compatibility mode if importing gpVersion fails. (cherry-picked from gpdb master)
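The fallback pattern can be sketched as follows; the module path `gpload_version` is a hypothetical placeholder, not gpload's real import, and only the try/except shape is the point.

```python
def gpload_mode():
    """Pick a compatibility mode based on whether the version module is
    importable; a failed import falls back to GPDB6 compatibility."""
    try:
        from gpload_version import gpVersion  # hypothetical module path
        return "native"
    except ImportError:
        return "gpdb6-compat"
```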
-
Committed by Huiliang.liu
gpload: change the metadata query SQL to improve performance. The old query could take a long time when the catalog is large.
-
Committed by Ashwin Agrawal
gpdeletesystem uses GpDirsExist() to check whether dump directories are present, so it can warn and avoid deleting the cluster; only with the "-f" option is deleting a cluster with dump directories allowed. However, this function incorrectly checks for files as well as directories named "*dump*", not just directories. So gpdeletesystem started failing after commit eb036ac1: FTS writes a file named `gpsegconfig_dump`, and GpDirsExist() incorrectly reports it as a backup directory and fails. Fix this by checking only for directories, not files. Fixes https://github.com/greenplum-db/gpdb/issues/8442 Reviewed-by: Asim R P <apraveen@pivotal.io>
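A minimal sketch of the corrected check (the function name mirrors GpDirsExist but the implementation is illustrative): only directory names are matched, so a plain file like gpsegconfig_dump no longer counts.

```python
import os

def dump_dirs_exist(root):
    """Return True only if a *directory* whose name contains 'dump'
    exists under root; plain files such as gpsegconfig_dump must not
    trigger the backup-directory warning."""
    for _dirpath, dirnames, _filenames in os.walk(root):
        for name in dirnames:
            if "dump" in name:
                return True
    return False
```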
-
- 31 Dec 2019, 1 commit
-
-
Committed by Paul Guo
This helps script handling by checking return values. Reviewed-by: Asim R P <apraveen@pivotal.io>
-
- 12 Dec 2019, 9 commits
-
-
Committed by Ning Yu
When shell=True is passed to Python's subprocess.Popen(), it composes a command string and launches a shell to parse and execute it; this may not handle spaces and other special characters correctly, and can cause security issues. We should execute commands with shell=False (the default) and pass the command as an args list, so that spaces and other special characters are parsed correctly. (cherry picked from commit 4f07bf1b)
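The safe pattern looks like this (the wrapper name is illustrative): the command is an args list, so arguments containing spaces or shell metacharacters reach the program verbatim instead of being re-parsed by a shell.

```python
import subprocess

def run_command(args):
    """Run a command with shell=False (the Popen default), passing args
    as a list so no shell ever re-parses the arguments."""
    assert isinstance(args, list)
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, _err = proc.communicate()
    return proc.returncode, out
```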
-
Committed by Ning Yu
Packcore uses the tar command to create a tarball of the coredump. The tar command can fail for various reasons, such as "permission denied", "no space left on device", or a missing gzip command, and it does not remove the incomplete tarball on failure, so we should remove it explicitly. (cherry picked from commit 3c6b0bf1)
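A sketch of the cleanup (function name and flags are an illustration of the idea, not packcore's exact code): if tar exits non-zero, remove whatever partial archive it left behind.

```python
import os
import subprocess

def create_tarball(tarball_path, source_dir):
    """Run tar to create a gzipped tarball; if tar fails for any
    reason, remove the partial file it may have left behind so a
    broken archive is never kept."""
    rc = subprocess.call(["tar", "-czf", tarball_path, "-C", source_dir, "."])
    if rc != 0 and os.path.exists(tarball_path):
        os.remove(tarball_path)
    return rc == 0
```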
-
Committed by Ning Yu
The pack dir is a temporary directory used by packcore to store the coredump, the postgres binary, and the shared libraries; once the tarball is created, the pack dir is removed. Packcore errors out if the pack dir already exists, to prevent removing user data unexpectedly. However, even when it errors out, it would still remove the pack dir during its cleanup. Fixed the logic so a pre-existing pack dir is not changed or removed by packcore. (cherry picked from commit a700419d)
-
Committed by Ning Yu
runGDB.sh is a convenient helper script to load the coredump; we now allow passing extra gdb arguments to it on the command line, like: ./runGDB.sh --batch -ex 'bt' This makes it friendlier for automated scripts. Also refactored the code to use a Python multiline string instead of multiple prints. (cherry picked from commit 0ab035be)
-
Committed by Ning Yu
gdb used to show verbose information about every loaded shared library when loading a coredump, and packcore relied on that output to list the libraries. However, with newer versions of gdb, such as the one shipped on Ubuntu 18.04, this information is no longer printed, so we should use the gdb `info sharedlibrary` command to list the libraries explicitly. Also set some proper gdb command-line options for better handling:
- `--batch`: exit after processing, so we do not need to execute the `quit` command explicitly;
- `--nx`: do not read any `.gdbinit` files.
(cherry picked from commit 9e6a3d52)
-
Committed by Ning Yu
packcore uses the file command to find the postgres binary path: it searches for the first single-quoted string and uses it as the path. However, consider the output of the file command on CentOS 6 (manually broken into multiple lines):
core.45685: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'postgres: 7000, gpadmin isolation2test [local] con7 cmd5 CREATE TABLE', real uid: 500, effective uid: 500, real gid: 501, effective gid: 501, execfn: '/usr/local/greenplum-db-devel/bin/postgres', platform: 'x86_64'
The first single-quoted string is actually the status line as shown by the ps command; we should use the `execfn` field as the path instead. (cherry picked from commit 8e249c41)
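The corrected parsing can be sketched as a small regex anchored on the `execfn` field rather than on the first quoted string (the function name is illustrative):

```python
import re

def postgres_path_from_file_output(text):
    """Extract the executable path from `file` output on a core dump.

    The first single-quoted string is the process title (the ps status
    line), so anchor on the execfn field instead of taking the first
    quoted string."""
    match = re.search(r"execfn:\s*'([^']*)'", text)
    return match.group(1) if match else None
```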