- 22 Oct 2020, 1 commit

Committed by Kalen Krempely
Use GPHOME directly. There is no need for GPSEARCH. This enables demo_cluster.sh to handle symlinks, which is useful when GPDB is installed using RPMs and a demo cluster is desired. (cherry picked from commit 6932195a) Co-authored-by: Jamie McAtamney <jmcatamney@vmware.com>
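The symlink situation can be illustrated with a small sketch (the paths and the use of `os.path.realpath` are illustrative, not the actual demo_cluster.sh logic): an RPM install lives in a versioned directory behind a stable symlink, and a tool pointed at the symlink must resolve it to the real location.

```python
import os
import tempfile

# Hypothetical layout: RPMs install to a versioned directory and expose a
# stable symlink that users set GPHOME to.
root = tempfile.mkdtemp()
real_install = os.path.join(root, "greenplum-db-5.28.0")
os.makedirs(real_install)
gphome = os.path.join(root, "greenplum-db")  # the symlink GPHOME points at
os.symlink(real_install, gphome)

# os.path.realpath follows the symlink to the actual install directory.
resolved = os.path.realpath(gphome)
assert resolved == os.path.realpath(real_install)
```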
-
- 21 Oct 2020, 1 commit

Committed by Jinbao Chen
The result of NULL NOT IN a non-empty set is false; the result of NULL NOT IN an empty set is true. But if a non-empty set has partitioned locus, the set will be divided into several subsets, and some subsets may be empty. Because NULL NOT IN an empty set evaluates to true, some tuples that shouldn't exist would appear in the result set. This patch disables the partitioned locus of the inner table by removing the join clause from the redistribution_clauses. Cherry-picked from 6X_STABLE commit 8c93db54f3d93a890493f6a6d532f841779a9188 Co-authored-by: Hubert Zhang <hubertzhang@apache.org> Co-authored-by: Richard Guo <riguo@pivotal.io>
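The semantics the message relies on can be sketched in plain Python modeling SQL's three-valued logic (this is an illustration, not GPDB code): NOT IN over an empty set is vacuously true even for NULL, so evaluating per-subset results independently can wrongly admit rows.

```python
# Model SQL three-valued logic: True, False, or None (SQL NULL / unknown).
def sql_not_in(value, s):
    """value NOT IN s under SQL semantics; returns True, False, or None."""
    if not s:
        return True   # NOT IN over an empty set is vacuously true
    if value is None:
        return None   # NULL compared with anything is unknown
    if value in s:
        return False
    if any(x is None for x in s):
        return None   # cannot rule out a match against a NULL member
    return True

# NULL NOT IN a non-empty set is unknown (treated as false in WHERE):
assert sql_not_in(None, {1, 2}) is None
# NULL NOT IN an empty set is true -- the case that admits spurious tuples
# when a partitioned set is split into subsets, some of which are empty:
assert sql_not_in(None, set()) is True
```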
-
- 16 Oct 2020, 1 commit

Committed by Shreedhar Hardikar
gpfdist uses the global xid & timestamp to distinguish whether each connection belongs to the same external scan. ORCA generates a unique scan number for each ExternalScan within the same plan, but not across plans. So, within a transaction, we may issue multiple external scans that do not get differentiated properly, producing different results. This commit patches that by using a different scan number across plans, just as the planner does. Ideally gpfdist should also take into account the command-id of the query to prevent this problem in other cases such as prepared statements.
-
- 14 Oct 2020, 1 commit

Committed by Chris Hajas
Corresponding ORCA commit: "Disallow physical index scans in Orca to have predicate condition with subquery"
-
- 08 Oct 2020, 1 commit

Committed by Brent Doil
The task is being used in the 5X_release pipeline. Authored-by: Brent Doil <bdoil@vmware.com>
-
- 03 Oct 2020, 2 commits

Committed by Jesse Zhang
-
Committed by Amil Khanzada
We're not sure when this file became abandoned, but it doesn't seem to be used anywhere. Also remove the task file and bash scripts that were only referenced by this pipeline. Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> (cherry picked from commit b55a0b71)
-
- 02 Oct 2020, 4 commits

Committed by Jesse Zhang
In the same vein as commit 868fd2d7, this adds a very quick clang-format job that checks for conformance, similar to what the Travis CI job did in commit 54273fdd (greenplum-db/gpdb#10357). (cherry picked from commit 6aac45f3)
-
Committed by Jesse Zhang
This patch adds a simple job that checks for clang-format conformance, similar to what the Travis CI job did in commit 54273fdd (greenplum-db/gpdb#10357). I was not amused when I realized that gen_pipeline.py doesn't do the due diligence of finding transitive dependencies (so the new job actually already blocks release, but the script insists on stopping at immediate blockers). But that's left for another day. (cherry picked from commit 868fd2d7)
-
Committed by Jesse Zhang
For context, the registry-image resource was introduced more than two years ago as a drop-in replacement for the docker-image resource (especially when you're only fetching, not building). It's leaner, less resource-intensive, faster, and doesn't rely on spawning a Docker daemon. Also swept up in this patch are four unused files left behind by previous changes. See https://github.com/concourse/registry-image-resource for more. This is spiritually a cherry-pick of be6ff30f, but I'm only touching the 5X_STABLE pipeline and the PR pipeline because they are in my way of backporting 868fd2d7 and 6aac45f3 (clang-format check).
-
Committed by Jesse Zhang
In the same vein as commits a5fb66d3 and d1f74955. fly validate-pipeline (and set-pipeline) warns about it being deprecated. This is also slightly in my way of backporting 868fd2d7 and 6aac45f3.
-
- 01 Oct 2020, 1 commit

Committed by Bhuvnesh Chaudhary
If any segment host was down and the temporary or transaction filespace had been moved to a different (i.e. non-default) filespace, we used to SSH to the down host to update the filespace content in a file there, which would fail because the host is down. This caused utilities such as gprecoverseg to fail even if the segments on the down host were excluded from the gprecoverseg input file.
-
- 30 Sep 2020, 1 commit

Committed by Jesse Zhang
This includes the commit on master, 2 back branches, and the standalone gporca repository. (cherry picked from commit 7ab6b7a4)
-
- 29 Sep 2020, 6 commits

Committed by Jesse Zhang
The canonical config file is in src/backend/gpopt/.clang-format (instead of under the non-existent src/backend/gporca). I've created one symlink (instead of two), for the GPOPT headers. Care has been taken to repoint the symlink to the canonical config under gpopt, instead of gporca as it is under HEAD. This is spiritually a cherry-pick of commit 2f7dd76c. (cherry picked from commit 2f7dd76c)
-
Committed by Jesse Zhang
Add a stage called "check-format" that runs before the fan-out of installcheck-small. While we're at it, also enforce that the full config file is generated from the intent file. This should be fairly quick: on my laptop, the `fmt chk` step takes 3 seconds (on this back branch, it actually takes 1 second). Care has been taken to add back the matrix syntax in the Travis CI config so it's still valid. Also subtle is the explicit mention of the "test" stage in the matrix so as not to lose it in the expansion, as this back branch effectively doesn't have a build matrix (every "matrix key" has at most one value). Enforcement in Concourse is forthcoming. (cherry picked from commit 54273fdd)
-
Committed by Jesse Zhang
This is intended for both local developer use and for CI. It depends on GNU parallel.

One-time install:
- macOS: brew install parallel clang-format
- Debian: apt install parallel clang-format-10

To format all ORCA / GPOPT code: $ src/tools/fmt fmt
To check for formatting conformance: $ src/tools/fmt chk

To modify the configuration, you'll need two steps:
1. Edit clang-format.intent.yml
2. Generate the expanded configuration file: $ src/tools/fmt gen

This commit also adds a formatting README to document some of the rationale behind the tooling choice. Also mention the new `README.format.md` from both the style guide and ORCA's main README. (cherry picked from commit 57b744c1)
-
Committed by Jesse Zhang
Generated using clang-format-10. This is spiritually a cherry-pick of commit 16b48d24, but I had to tweak things a bit by moving the .clang-format file from src/backend/gporca to src/backend/gpopt. (cherry picked from commit 16b48d24)
-
Committed by Shreedhar Hardikar
In a previous ORCA version (3.311) we added code to fall back gracefully when a subquery select list contains a single outer ref that is not part of an expression, such as in: select * from foo where a is null or a = (select foo.b from bar). This commit adds a fix that allows ORCA to handle such queries by adding a project in the translator that echoes the outer ref from within the subquery, and using that projected value in the select list of the subquery. This ensures that we use a NULL value for the scalar subquery in the expression for the outer ref when the subquery returns no rows. Also note that this is still skipped for grouping cols in the target list. This was done to avoid regressions for certain queries, such as: select * from A where not exists (select sum(C.i) from C where C.i = A.i group by a.i); ORCA is currently unable to decorrelate subqueries that contain project nodes, so a `SELECT 1` in the subquery would also cause this regression. In the above query, the parser adds `a.i` to the target list of the subquery, which would get an echo projection (as described above) and thus would prevent decorrelation by ORCA. For this reason, we decided to maintain the existing behavior until ORCA is able to handle projections in subqueries better. Also add ICG tests. Co-authored-by: Hans Zeller <hzeller@pivotal.io> Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Hans Zeller
The corresponding ORCA PR is https://github.com/greenplum-db/gporca/pull/605
-
- 26 Sep 2020, 1 commit

Committed by xiong-gang
We had several issues on 5X and 6X showing that the resource group wait queue is somehow corrupted. This commit adds some checks and PANICs at the earliest point to help debugging.
-
- 22 Sep 2020, 1 commit

Committed by Kalen Krempely
Co-authored-by: Xin Zhang <zhxin@vmware.com> Co-authored-by: Brent Doil <bdoil@vmware.com>
-
- 21 Sep 2020, 1 commit

Committed by Jinbao Chen
We have hit interconnect hang issues many times, in many cases all with the same pattern: the downstream interconnect motion senders keep sending tuples, blind to the fact that upstream nodes have finished and quit execution earlier; the QD then gets enough tuples and waits for all QEs to quit, which causes a deadlock. Many nodes may quit execution early, e.g., LIMIT, Hash Join, Nest Loop. To resolve the hang, they need to stop the interconnect stream explicitly by calling ExecSquelchNode(); however, we cannot do that for rescan cases, in which data might be lost, e.g., commit 2c011ce4. For rescan cases, we tried using QueryFinishPending to stop the senders in commit 02213a73, letting senders check this flag and quit. That commit has its own problems: first, QueryFinishPending can only be set by the QD, so it doesn't work for INSERT or UPDATE cases; second, it only lets the senders detect the flag and quit the loop in a rude way (without sending the EOS to the receiver), so the receiver may still be stuck receiving tuples. This commit first reverts the QueryFinishPending method. To resolve the hang, we move TeardownInterconnect ahead of cdbdisp_checkDispatchResult, which guarantees that the interconnect stream is stopped before waiting for and checking the status of QEs. For UDPIFC, TeardownInterconnect() removes the ic entries, so any packets for this interconnect context will be treated as 'past' packets and be acked with the STOP flag. For TCP, TeardownInterconnect() closes all connections with its children, and the children will treat any readable data in the connection, including the closure operation, as a STOP message. Backported from master commit ec1d9a70.
-
- 19 Sep 2020, 2 commits

Committed by Polina Bungina
Execution of a long enough query containing multi-byte characters can cause incorrect truncation of the query string. Incorrect truncation implies an occasional cut through a multi-byte character and (with log_min_duration_statement set to 0) a subsequent write of an invalid symbol to segment logs. Such a broken character in the logs causes problems when trying to fetch log info from the gp_toolkit.__gp_log_segment_ext table: queries fail with the error «ERROR: invalid byte sequence for encoding…». This is caused by the buildGpQueryString function in `cdbdisp_query.c`, which prepares the query text for dispatch to the QEs. It does not take character length into account when truncation is necessary (i.e., when the text is longer than QUERY_STRING_TRUNCATE_SIZE). (cherry picked from commit f31600e9)
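The fix amounts to truncating on a character boundary rather than a byte boundary. A minimal sketch in Python (the real fix is C code in buildGpQueryString; the helper name and the size constant here are illustrative): back off from the cut point while the next byte is a UTF-8 continuation byte.

```python
QUERY_STRING_TRUNCATE_SIZE = 10  # illustrative; the real constant is in cdbdisp_query.c

def truncate_utf8(raw: bytes, limit: int) -> bytes:
    """Truncate a UTF-8 byte string without cutting a multi-byte character."""
    if len(raw) <= limit:
        return raw
    cut = raw[:limit]
    # Back off while the first excluded byte is a UTF-8 continuation byte
    # (0b10xxxxxx), i.e. while the cut would land mid-character.
    while cut and (raw[len(cut)] & 0xC0) == 0x80:
        cut = cut[:-1]
    return cut

query = "select '€€€'".encode("utf-8")      # '€' is 3 bytes in UTF-8
naive = query[:QUERY_STRING_TRUNCATE_SIZE]  # may split a character in two
safe = truncate_utf8(query, QUERY_STRING_TRUNCATE_SIZE)
safe.decode("utf-8")                        # decodes cleanly; naive may not
```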
- 18 Sep 2020, 2 commits

Committed by xiong-gang
When client_encoding is dispatched to the QEs, error messages generated in the QEs are converted to client_encoding, but the QD assumes they are in the server encoding, which leads to corruption. This was fixed in 6X in a6c9b4, but this commit skips the gpcopy changes since 5X doesn't support the 'COPY...ENCODING' syntax. Fixes issue: https://github.com/greenplum-db/gpdb/issues/10815
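The corruption mode is a classic encode/decode mismatch. A hedged Python illustration (GPDB's conversion happens in C; the encodings chosen here are examples): bytes produced in one encoding and reinterpreted in another yield mojibake.

```python
# A QE renders an error message in the client encoding (say UTF-8)...
message = "ошибка"                 # "error" in Russian; 2 bytes per char in UTF-8
wire_bytes = message.encode("utf-8")

# ...but the QD assumes the bytes are in the server encoding (say LATIN1)
# and interprets them as such, corrupting the text.
misread = wire_bytes.decode("latin-1")
assert misread != message          # mojibake: same bytes, wrong characters

# Decoding with the encoding actually used recovers the message:
assert wire_bytes.decode("utf-8") == message
```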
-
Committed by David Kimura
Function `RelationGetIndexList()` does not filter out invalid indexes; that responsibility is left to the caller (e.g. `get_relation_info()`). The issue is that Orca was not checking index validity. This commit also introduces an optimization into Orca, already used in the planner, whereby we first check relhasindex before checking pg_index. (cherry picked from commit b011c351)
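The two checks described can be sketched as follows (Python pseudocode over hypothetical catalog structures; the real logic is C inside ORCA's relcache translation): short-circuit on relhasindex, then keep only pg_index entries marked valid.

```python
from dataclasses import dataclass, field

@dataclass
class Index:
    name: str
    indisvalid: bool = True  # stand-in for pg_index.indisvalid

@dataclass
class Relation:
    relhasindex: bool = False
    indexes: list = field(default_factory=list)  # stand-in for pg_index rows

def usable_indexes(rel: Relation) -> list:
    # Optimization borrowed from the planner: if relhasindex is false,
    # skip scanning pg_index entirely.
    if not rel.relhasindex:
        return []
    # RelationGetIndexList() does not filter invalid indexes; the caller must.
    return [ix for ix in rel.indexes if ix.indisvalid]

rel = Relation(relhasindex=True,
               indexes=[Index("t_a_idx"), Index("t_b_idx", indisvalid=False)])
names = [ix.name for ix in usable_indexes(rel)]  # only valid indexes remain
```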
-
- 17 Sep 2020, 2 commits

Committed by Asim R P
This bug was found in a production environment where a vacuum on gp_persistent_relation was running concurrently with a backend performing end-of-xact filesystem operations, and the GUC debug_persistent_print was enabled. The *_ReadTuple() function was called on a persistent TID after the corresponding tuple was deleted with a frozen transaction ID. The concurrent vacuum recycled the tuple, leading to a SIGSEGV when the backend tried to access values from the tuple. Fix it by avoiding the debug log message in the case when the persistent tuple is freed (transitioning to the FREE state). All other state transitions are still logged. In the absence of concurrent vacuum, things worked just fine, because the *_ReadTuple() interface reads tuples from persistent tables directly using the TID.
-
Committed by Weinan WANG
GPDB does not support foreign keys, but keeps the FK grammar in DDL, since it reduces the manual workload of migrating databases from other systems. Hence, the FK check for the TRUNCATE command is not needed; get rid of it.
-
- 11 Sep 2020, 1 commit

Committed by Peter Eisentraut
Using exit() requires stdlib.h, which is not included. Use return instead. Also add a return type for main(). Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com> (cherry picked from commit 1c0cf52b) (cherry picked from commit 6d3c99bb)
-
- 10 Sep 2020, 5 commits

Committed by Jesse Zhang
This file will be used to record commits to be ignored by default by git-blame (the user still has to opt in). It is intended to include large (generally automated) reformatting or renaming commits. (cherry picked from commit b19e6abb)
-
Committed by Kalen Krempely
When the standby is unreachable and the user proceeds with startup, gpstart fails when temporary or transaction files have been moved to a non-default filespace. To determine when the standby is unreachable, fetch_tli was reworked to raise a StandbyUnreachable exception, and the standby is not started if it is unreachable. Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@vmware.com>
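The reworked control flow can be sketched as follows (Python, since gpstart is a Python utility; everything besides the fetch_tli and StandbyUnreachable names is illustrative, not the actual gpstart code):

```python
class StandbyUnreachable(Exception):
    """Raised when the standby master's timeline ID cannot be fetched."""

def fetch_tli(standby_reachable: bool) -> int:
    # Illustrative stand-in: the real function contacts the standby host.
    if not standby_reachable:
        raise StandbyUnreachable("standby host did not respond")
    return 1

def start_cluster(standby_reachable: bool) -> dict:
    started = {"master": True, "standby": False}
    try:
        fetch_tli(standby_reachable)
        started["standby"] = True   # only start the standby if reachable
    except StandbyUnreachable:
        # Proceed with startup, skipping the standby instead of failing.
        pass
    return started
```

With this shape, `start_cluster(False)` completes and simply leaves the standby down, rather than aborting the whole startup.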
-
Committed by Jacob Champion
Add a failing behave test to ensure that gpstart prompts and continues successfully if the standby host is unreachable. The subsequent commit will fix the test case. Co-authored-by: Kalen Krempely <kkrempely@vmware.com>
-
Committed by Lisa Owen
-
Committed by David Kimura
This approach special-cases gp_segment_id enough to include the column as a distributed column constraint. It also updates direct dispatch info to be aware of gp_segment_id, which represents the raw value of the segment where the data resides. This is different from other columns, which hash the datum value to decide where the data resides. After this change, the following DDL shows a Gather Motion from 2 segments on a 3-segment demo cluster.

```
CREATE TABLE t(a int, b int) DISTRIBUTED BY (a);
EXPLAIN SELECT gp_segment_id, * FROM t WHERE gp_segment_id=1 or gp_segment_id=2;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..431.00 rows=1 width=12)
   ->  Seq Scan on t  (cost=0.00..431.00 rows=1 width=12)
         Filter: ((gp_segment_id = 1) OR (gp_segment_id = 2))
 Optimizer: Pivotal Optimizer (GPORCA)
(4 rows)
```

(cherry picked from commit 10e2b2d9)
* Bump ORCA version to 3.110.0
-
- 09 Sep 2020, 4 commits

Committed by Kalen Krempely
This reverts commit 5bab25f7. The following test_movetransfiles.py tinc tests were failing in the cs_walrep_1 job: test_standby_is_configured, test_transfiles_are_moved, test_tempfiles_are_moved. gpfilespace --showtransfilespace and --showtempfilespace return a non-zero exit code.
-
Committed by Kalen Krempely
This reverts commit 05071344.
-
Committed by Kalen Krempely
When the standby is unreachable and the user proceeds with startup, gpstart fails when temporary or transaction files have been moved to a non-default filespace. To determine when the standby is unreachable, fetch_tli was reworked to raise a StandbyUnreachable exception, and the standby is not started if it is unreachable. Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@vmware.com>
-
Committed by Jacob Champion
Add a failing behave test to ensure that gpstart prompts and continues successfully if the standby host is unreachable. The subsequent commit will fix the test case. Co-authored-by: Kalen Krempely <kkrempely@vmware.com>
-
- 04 Sep 2020, 1 commit

Committed by xiong-gang
Commit 4f5a2c23 breaks the unit test cdbtm_test.
-
- 03 Sep 2020, 1 commit

Committed by Hubert Zhang
-