- 15 Aug 2019, 6 commits
-
-
Committed by Richard Guo
When generating a plan for multi-DQA, we need to build a coplan on top of the shared scan for each distinct DQA, and then join the coplans back together. While building the coplan for each DQA, the RelOptInfo and RangeTblEntry arrays of the PlannerInfo are rebuilt and the original arrays are replaced. So, for each DQA other than the first one, we need to restore the arrays to what they were for the original shared scan plan before building its coplan. Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io> Reviewed-by: pengzhout <ptang@pivotal.io>
-
Committed by Soumyadeep Chakraborty
The movedb_failure_callback existed to ensure that if copydir fails inside movedb, it would clean up the destination directory. We introduced the pending deletes mechanism to ensure cleanup throughout 2PC in b0208894. These are two conflicting mechanisms, especially evident in the following scenario:

1. An error occurs after the ScheduleDbDirDelete on the target tablespace directory and inside the ENSURE block.
2. movedb_failure_callback() is invoked first, which calls rmtree() on the target tablespace directory.
3. DoPendingDbDeletes() is called from within the subsequent AbortTransaction(). Since there was a pending db delete, dropDatabaseDirectory() is called and it eventually calls rmtree(). Since the directory was already deleted at this point, this rmtree produces the following warnings (which led to a test failure in CI):

```
WARNING: could not open directory <target_dboid_dir>: No such file or directory
WARNING: some useless files may be left behind in old database directory <target_dboid_dir>
```

The movedb_failure_callback mechanism is totally unnecessary once we can ensure that the pending deletes mechanism caters to copydir failures. This patch takes care of that by moving the ScheduleDbDirDelete call before the copydir.

Note: To minimize upstream diff, we replace the ENSURE block with a standalone block.

Co-authored-by: Ning Yu <nyu@pivotal.io> (cherry picked from commit 3adad712) Co-authored-by: Ning Yu <nyu@pivotal.io>
-
Committed by Richard Guo
For a Result pathnode, the parent is set to NULL, so we cannot reference the parent in cost_common_agg() when estimating the cost of aggregation. Instead, we can use the rows in the Path itself and use 0 as the width. This patch fixes github issue #8357.
-
Committed by Bhuvnesh Chaudhary
ADD CONSTRAINT USING INDEX is a new feature which did not exist in the 4.x or 5.x branches. This is taken from upstream commit eb7ed3f3. With partition tables, the changes are not propagated to the leaf partitions if the operation is done on the root table, so it is currently disallowed there. This introduces rel_is_interior_partition to determine if a relation is an interior node in a partition hierarchy. Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
Committed by Ivan Novick
and add missing options
-
- 14 Aug 2019, 16 commits
-
-
Committed by Adam Berlin
Remove dead code from enable_partition_rules GUC removal (#8310) This removes the postCreate member from the CreateStmt struct, as well as all places using it. With the postCreate entry on the List removed due to being dead, it follows that the entry after it is dead too, so all doRuleStmt handling is removed as well. Discussion: https://github.com/greenplum-db/gpdb/pull/8285 Reported-by: Jesse Zhang (cherry picked from commit 738d919e) (cherry picked from commit 04c64b46)
-
Committed by Daniel Gustafsson
The tree I was working off clearly had stale files, which led me to include two utils which were removed some time ago: gpcheckutil.py and gpcheck.py. Remove these two from their respective Makefiles. Also fix a Bash error in the Stream symlink test: the logical AND requires [[ .. ]]; rather than [ .. ];. Both of these were spotted while repeatedly running make install with trees in various states. (cherry picked from commit 9f707e1e)
-
Committed by Daniel Gustafsson
Commit feb3a399 removed the example postgresql_conf_gp_additions file for gpinitsystem, but the refactor of the gpMgmt/bin Makefiles didn't get the memo, as the file existed when that work was done. Fix by removing the file to avoid a warning during make install. (cherry picked from commit 806cc27c)
-
Committed by Daniel Gustafsson
The big gpMgmt Makefile rewrite in b5aba18b was a paddle short for make distclean in gpMgmt/bin/lib as it was missing the 'rm' invocation. (cherry picked from commit 20b521ca)
-
Committed by Daniel Gustafsson
With the revamp of the gpMgmt Makefiles, the catalog json and stream binary are now compiled on make all, so we should perform them there rather than opt for building on make install. This still leaves some modules which are compiled on make install, such as pygresql, but a small start is still a start. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit aa1640a6)
-
Committed by Daniel Gustafsson
The psi package is no longer installed anywhere, so remove the target from the Makefile. Reported-by: Shaoqi Bai Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 832896ff)
-
Committed by Daniel Gustafsson
Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 22e8d1df)
-
Committed by Daniel Gustafsson
The stream code lacked prototypes proper enough to be accepted by a modern compiler with sane compiler flags; fix by declaring the functions as taking a void parameter. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 6de21f62)
-
Committed by Daniel Gustafsson
Installing the Management utilities used to be a pretty brute-force operation which copied more or less everything over blindly and then tried to remove what shouldn't be installed. This is clearly not a terribly clean and sustainable solution, as subsequent issues with it have proven (editor savefiles, patch .rej/.orig files etc were routinely copied and never purged). This takes a first stab at turning installation of gpMgmt/bin, sbin and doc into proper recursive make targets which only install the files that were intended to be installed. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit b5aba18b)
-
Committed by Daniel Gustafsson
Break out compilation of stream from the gpMgmt/bin Makefile into its own Makefile which follows the general structure for how programs are compiled in the tree. The resulting binary is installed in bin/lib with a symlink to the previously used location. This also moves stream from src/ as it was the only directory left in there making the indirection pointless. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 0cad6637)
-
Committed by Daniel Gustafsson
gpfdist was rewritten in C from Python, but the old Python code was never removed from the repo. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit e12a1129)
-
Committed by Daniel Gustafsson
The packcore version has been fixed at 0.4beta since 2014, when it was bumped from 0.3alpha. As it is installed alongside other tools, let's version it with the main distribution. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 4bfd3875)
-
Committed by Daniel Gustafsson
The sbindir is used by some Python management utilities, but was hardcoded rather than using the properly supplied variable. Exporting it from Makefile.global is a first step; next up is to fix all instances where the directory name is hardcoded. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 45478fb2)
-
Committed by Daniel Gustafsson
The compiled Python files ending with 'c' (.pyc) were only partially removed on make clean, as the list in the Makefile hadn't been kept in sync with the gitignore list. Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit 14e2f8bc)
-
Committed by Daniel Gustafsson
The .gitignore for gpMgmt had lots of entries for long since removed applications. The apps and their respective delete commits were:

* gp_df a7405a274c8ad392141ed4006adde8c506467af9
* gpbitmapreindex e9b9081bf948f60aadf4f41a4e3f8ff43ce888ce
* gpdbrestore & gpcrondump 8922315e
* gpdebug 5ef6b86b
* gpfilespace 5a3a39bc
* gpkill 29f17dd8
* gpmigrator 68178574

Discussion: https://github.com/greenplum-db/gpdb/pull/8179 Reviewed by Bradford Boyle, Kalen Krempely, Jamie McAtamney and many more (cherry picked from commit c41b48ab)
-
Committed by Bradford D. Boyle
The build artifacts in gp-internal-artifacts are not immutable. If `regexp` is used when fetching the resource, then new builds (with the same version) are not detected. This means that fixes in the packaging may not be correctly picked up by a pipeline. Authored-by: Bradford D. Boyle <bboyle@pivotal.io> (cherry picked from commit 563e9f60)
-
- 13 Aug 2019, 3 commits
-
-
Committed by Bradford D. Boyle
Authored-by: Bradford D. Boyle <bboyle@pivotal.io>
-
Committed by Bradford D. Boyle
As part of melting the MADlib CI snowflakes, this commit updates the pipeline to pull these resources from the same GCS bucket that holds the other GPDB6 build dependencies. Authored-by: Bradford D. Boyle <bboyle@pivotal.io> (cherry picked from commit d381e893)
-
Committed by Weinan WANG
In GPDB, we pre-execute initplans in ExecutorStart to reduce the slice number. However, some initplans, which require params from a node at the same level, should follow the upstream execution flow. To recognize this initplan pattern, record `extParam` when creating the `subplan` object. Fixes issue #6953.
-
- 12 Aug 2019, 7 commits
-
-
Committed by Asim R P
The test injects a fault in StartTransaction(). This function is so generic that any concurrent test may trigger the fault, leading to a failed or misleading outcome. The test was using the old faultinjector, which emitted the status of a fault as a NOTICE message. NOTICE messages do not appear in isolation2 answer files, so this problem went undetected. (cherry picked from commit d160d5c1)
-
Committed by Asim R P
Remove NOTICE messages that follow a gp_inject_fault() select statement. Replace the boolean value with the text 'Success:' returned by the new interface. For reference, the following sed script was used to identify 't' following a ' gp_inject_fault ' line:

```
/^ gp_inject_fault $/{
$!{
N
:again1
N
s/ t$/ Success:/
t again1
}
}
```

(cherry picked from commit 748f86fb)
-
Committed by Asim R P
This test has failed at least once due to the terminate query being executed before the 'create table' statement it was meant to terminate. This was evident from the master logs. This commit makes the test more reliable by injecting a fault and waiting for the fault to be triggered before executing pg_terminate_backend(). As a side benefit, we no longer need to create any additional table. (cherry picked from commit 63bed5eb)
-
Committed by Asim R P
The new interface employs a special libpq connection parameter and a libpq message to convey fault details to the destination postmaster. This interface has been in use for some time now, and it was getting cumbersome to maintain both the old and the new interface. To be clear, we now have only one interface to inject faults and that is "gp_inject_fault()". Existing tests that were using gp_inject_fault2() will be updated in a follow-up commit. (cherry picked from commit 83c336ad)
-
Committed by Asim R P
If a test decides to fail because not all segments are running, it will be nice to get the details of segments that are found to be not running. The details printed by this patch will only be shown by behave upon a test failure. They will not be shown when a test passes. Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io> Reviewed-by: Jacob Champion <pchampion@pivotal.io> Reviewed-by: David Krieger <dkrieger@pivotal.io> (cherry picked from commit 2237ef21)
-
- 10 Aug 2019, 5 commits
-
-
Committed by Shoaib Lari
When reading the Behave test code for stopping segments, it is not obvious that we are waiting for the segment to stop. Therefore we are explicitly adding the `-w` flag to show the blocking behavior. Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io> (cherry picked from commit 1e946005)
-
Committed by mkiyama
-
Committed by Amil Khanzada
When there are no bats testcases present, the current coding returned an error due to the glob not being able to expand. This was made clear by the recent removal of the last test, and the discussion in #8270. Rather than using a glob, use ls and an if statement. This allows us to keep the unittest framework in place even if there are currently no tests. Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io> (cherry picked from commit fabc6ca5)
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
Any temporary tables created in a session are cleaned up on exit. However, consider the following sequence of actions:

1. A temporary table is created (a backend is used for this session).
2. The backend for the temp table receives a SIGUSR1 signal and invokes procsignal_sigusr1_handler.
3. Immediately, a SIGTERM is received by the backend before anything is processed in procsignal_sigusr1_handler.

Here, SIGTERM will invoke die and it will attempt to remove the temp relations and commit the transaction. While committing the transaction, SyncRepWaitForLSN will be invoked, and if the commit LSN for the temp table session is greater than the LSN shipped to the standby, the backend will wait in WaitLatch for a byte to be written on the fd and keep polling until it receives a byte on the pipe. However, the walsender will not be able to wake the process up, as it uses the SIGUSR1 signal to communicate with the backend process, and since signal handlers are not re-entrant the backend will not terminate.

Before this commit as well, if we are in the SIGUSR1 handler, SyncRepWaitForLSN will exit early; however, if the jump to die happened without setting the InSIGUSR1Handler flag, SyncRepWaitForLSN will not realize that it is already in the SIGUSR1 handler and will hit the issue described above. In this commit, instead of using the flag we use pthread_sigmask to identify if we are already in the SIGUSR1 handler.

Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io> Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
- 09 Aug 2019, 1 commit
-
-
Committed by Adam Berlin
Wait until we've inserted the fts_probe skip fault AND we have observed the fault being hit. This ensures that we've completed the in-progress fts probe before continuing on with a test. Previously, if we didn't wait and our test continued forward to inject a panic, the in-progress fts probe would do its job and promote the mirror. Reported-by: Asim R P <apraveen@pivotal.io> (cherry picked from commit 38dd2da3)
-
- 07 Aug 2019, 2 commits
-
-
Committed by Kalen Krempely
This patch addresses two main scenarios:

1) Allowing multiple probes, both internal and external, to reuse the same results when appropriate (i.e., piggybacking on previous results). Multiple requests should share the same results if they all request before the start of a new fts loop and after the results of the previous probe.

2) Ensuring fresh results from an external probe. When a request occurs during a probe already in progress, that request should get fresh results rather than "piggybacking" on the current results.

We use similar logic as the checkpointer code to detect whether a probe is in progress, with a probe start tick and a probe end tick. To request a probe, we send a signal requesting fts results, then wait for a new loop to start, then wait again for that loop to finish. This implementation uses a busy-wait loop which includes a short sleep. In the future, we can leverage the upstream condition variable implementation, which enables us to signal multiple fts notify processes.

This was done via a manual cherry-pick from a674b6b3025b9dc56c4cb34b3330f8b7bc1bf757.

Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Alexandra Wang <lewang@pivotal.io> Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by dyozie
-