- 17 January 2018 (1 commit)
-
Committed by Heikki Linnakangas
A long time ago, GIN indexes were disabled, because the GIN code had not been modified to work with file replication. Now with WAL replication, there's no reason not to support them.
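A minimal sketch of what this re-enables (table, column, and query are my own hypothetical example, not from the commit):

```
-- GIN indexes, e.g. for full-text search, can now be created again:
CREATE TABLE docs (id int, body tsvector);
CREATE INDEX docs_body_gin ON docs USING gin (body);
SELECT id FROM docs WHERE body @@ to_tsquery('replication');
```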
-
- 15 January 2018 (3 commits)
-
Committed by Daniel Gustafsson
Since ICW is a very long-running process, attempt to reduce time consumption by reducing overhead during testing while keeping the tests constant (coverage not reduced). Avoid dropping/recreating underlying test tables when not required, reduce the number of partitions in some cases, and skip pointless drops for objects we know don't exist. In total this shaves about 30-40 seconds off an ICW run on my local machine; mileage may vary.
-
Committed by Daniel Gustafsson
Make sure that we are able to exchange in an external partition, and also ensure that truncation doesn't recurse into the external partition.
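A hedged sketch of the two behaviors under test (table names and the gpfdist location are hypothetical; GPDB requires WITHOUT VALIDATION when exchanging in an external table):

```
CREATE EXTERNAL TABLE sales_ext (LIKE sales)
    LOCATION ('gpfdist://etlhost:8081/sales.csv') FORMAT 'CSV';
-- exchange the external table in as a leaf partition
ALTER TABLE sales EXCHANGE PARTITION FOR (RANK(1))
    WITH TABLE sales_ext WITHOUT VALIDATION;
-- truncation of the parent must not recurse into the external leaf
TRUNCATE TABLE sales;
```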
-
Committed by Daniel Gustafsson
The timezone data in Greenplum comes from the base version of PostgreSQL that the current version of Greenplum is based on. This causes issues, since it means we are years behind on tz changes that have happened. This pulls in the timezone data and code from PostgreSQL 10.1, with as few changes to Greenplum as possible to minimize merge conflicts. The goal is to gain data rather than features, and to let each Greenplum release stay current with the IANA tz database as it is imported into upstream PostgreSQL. This removes a Greenplum-specific test for the Yakutsk timezone, as it was made obsolete by upstream tz commit 1ac038c2c3f25f72.
-
- 13 January 2018 (6 commits)
-
Committed by Jamie McAtamney
We will not be supporting these utilities in GPDB 6. References to gpcrondump and gpdbrestore in the gpdb-doc directory have been left intact, as the documentation will be updated to refer to gpbackup and gprestore in a separate commit. Author: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Heikki Linnakangas
The test was sensitive to the number of pages in the pg_rewrite system table's index, for no good reason. Also, don't create a new database for it, to speed it up.
-
Committed by Heikki Linnakangas
pg_partitions contains calls to the pg_get_expr() function. That function suffers from a race condition: if the relation is dropped between the get_rel_name() call and another syscache lookup in pg_get_expr_worker(), you get a "relation not found" error. The error message is reasonable, and I don't see any easy fix for the pg_partitions view itself, so just try to avoid hitting that in the tests. For some reason we are hitting it frequently in this particular query. Change it to query pg_class instead; it doesn't use any of the more complicated fields from pg_partitions, anyway.

I'm pushing this to the 'walreplication' branch first, because for some reason we're seeing the failure there more often than on 'master'. If this fixes the problem, I'll push it to 'master', too.
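As a sketch of the workaround (hypothetical table name, relying on the parent_1_prt_child naming convention for partition children):

```
-- race-prone: pg_partitions calls pg_get_expr() per partition
-- SELECT partitiontablename FROM pg_partitions WHERE tablename = 'mytab';
-- safer for a simple existence/count check:
SELECT relname FROM pg_class WHERE relname LIKE 'mytab_1_prt_%';
```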
-
Committed by Heikki Linnakangas
These were left over when Persistent Tables and Filerep were removed.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
* Revert almost all the changes in smgr.c / md.c, to not go through the Mirrored* APIs.
* Remove mmxlog stuff. Use upstream "pending relation deletion" code instead.
* Get rid of multiple startup passes. Now it's just a single pass, like in the upstream.
* Revert the way database drop/create are handled to the way it is in upstream. Doesn't use PT anymore, but accesses the file system directly, and WAL-logs a single CREATE/DROP DATABASE WAL record.
* Get rid of MirroredLock.
* Remove a few tests that were specific to persistent tables.
* Plus a lot of little removals and reverts to upstream code.
-
- 12 January 2018 (1 commit)
-
Committed by Shreedhar Hardikar
This commit brings in ORCA changes that ensure that a Materialize node is not added under a Filter when its child contains outer references. Otherwise, the subplan is not rescanned (because it is under a Material), producing wrong results. A rescan is necessary because it evaluates the subplan for each of the outer-referenced values. For example:

```
SELECT * FROM A, B
WHERE EXISTS (SELECT * FROM E WHERE E.j = A.j AND
              B.i NOT IN (SELECT E.i FROM E WHERE E.i != 10));
```

For the above query ORCA produces a plan with two nested subplans:

```
Result
  Filter: (SubPlan 2)
  -> Gather Motion 3:1
     -> Nested Loop
        Join Filter: true
        -> Broadcast Motion 3:3
           -> Table Scan on a
        -> Table Scan on b
  SubPlan 2
    -> Result
       Filter: public.c.j = $0
       -> Materialize
          -> Result
             Filter: (SubPlan 1)
             -> Materialize
                -> Gather Motion 3:1
                   -> Table Scan on c
  SubPlan 1
    -> Materialize
       -> Gather Motion 3:1
          -> Table Scan on c
             Filter: i <> 10
```

The Materialize node (on top of the Filter with SubPlan 1) has cdb_strict = true. The cdb_strict semantics dictate that when the Materialize is rescanned, instead of destroying its tuplestore, it resets the accessor pointer to the beginning, and the subtree is NOT rescanned. So the entries from the first scan are returned for all future calls; i.e. the results depend on the first row output by the cross join. This causes wrong and non-deterministic results.

Also, this commit reinstates this test in qp_correlated_query.sql. It also fixes another wrong result caused by the same issue. Note that the changes in rangefuncs_optimizer.out are because ORCA now no longer falls back for those queries. Instead it produces a plan which is executed on the master (instead of on the segments, as was done by the planner), which changes the error messages.

Also bump ORCA version to 2.53.8.

Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 10 January 2018 (1 commit)
-
Committed by Heikki Linnakangas
Now that we use the upstream implementation for window functions, the 'gp_enable_sequential_window_plans' and 'gp_idf_deduplicate' GUCs are no longer needed.
-
- 06 January 2018 (1 commit)
-
Committed by Sambitesh Dash
Instead of assuming that casts are always binary-coercible (and hence that we could get away with just dropping them), translate casts in ORCA plans into either a RelabelType or a FuncExpr.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
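A minimal illustration of the two plan shapes (my own example, not from the commit):

```
SELECT 'abc'::varchar::text;  -- binary-coercible: becomes a RelabelType
SELECT 42::int4::int8;        -- needs a conversion function: becomes a FuncExpr
```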
-
- 05 January 2018 (2 commits)
-
Committed by Kavinder Dhaliwal
-
Committed by Jesse Zhang
The `gporca` regression test suite uses a schema, but doesn't really switch `search_path` to the schema that's meant to encapsulate most of the objects it uses. This has led to multiple instances where we:
1. Either used a table from another namespace by accident;
2. Or leaked objects into the public namespace that other tests in turn accidentally depended on.

As we were about to add a few user-defined types and casts to the test suite, we want to (at last) ensure that all future additions are scoped to the namespace.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
Closes #4238
-
- 03 January 2018 (3 commits)
-
Committed by Shreedhar Hardikar
* Fix 4 (out of 64) windowerr tests that use row_number(), which is non-deterministic.
* Fix the remaining 58 (out of 64) tests. There was a difference in the results between planner and optimizer due to different row_number values being assigned. row_number() is inherently non-deterministic in GPDB. For example, for the following query:

```
select row_number() over (partition by b) from foo;
```

Let's say that foo is not distributed by b. In this case, to compute the WindowAgg, we would first have to redistribute the table on b (or gather all the tuples on the master). Thus, for rows having the same b value, the row_number assigned depends on the order in which they are received by the WindowAgg, which is non-deterministic. In the qp_olap_windowerr.sql tests, we mitigate this by forcing an order on the ord column, which is unique in this context, making it easier to compare test results.
* Remove a FIXME comment and enable optimizer_trace_fallback.

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
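A sketch of that mitigation, reusing the message's own example:

```
-- ord is unique, so the assigned row numbers no longer depend on arrival order
select row_number() over (partition by b order by ord) from foo;
```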
-
Committed by Haisheng Yuan
-
Committed by Ekta Khanna
The new tests added as part of the postgres merge contain tests of window functions like last_value and nth_value without specifying an ORDER BY clause in the window function. Due to the Redistribute Motion getting added, the order is not deterministic without an explicit ORDER BY clause within the window function. This commit updates such tests for the relevant changes in ORCA (https://github.com/greenplum-db/gporca/commit/855ba856fdc59e88923523f1f8b2ead32ae32364).

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
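A sketch of the nondeterminism being addressed (hypothetical table):

```
SELECT last_value(x) OVER (PARTITION BY p)            FROM t;  -- any peer row may win
SELECT last_value(x) OVER (PARTITION BY p ORDER BY x) FROM t;  -- deterministic per row
```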
-
- 02 January 2018 (5 commits)
-
Committed by Heikki Linnakangas
The ao_ptotal() test function was broken a long time ago, in the 8.3 merge, by the removal of the implicit cast from text to integer. You just got an "operator does not exist: text > integer" error. However, the queries using the function were inside start/end_ignore blocks, so the error went unnoticed.

We have tests on tupcount elsewhere, in the uao_* tests, for example. Whether the table is partitioned or not doesn't seem very interesting. So just remove the test queries, rather than try to fix them. (I don't understand what the endianness issue mentioned in the comment might've been.)

I kept the test on COPY with REJECT LIMIT on a partitioned table. I'm not sure how interesting that is either, but it wasn't broken. While at it, I reduced the number of partitions used, though, to shave off a few milliseconds from the test.
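The failure mode, in a one-line stand-in (my own example, not from the commit):

```
SELECT '42'::text > 0;  -- ERROR: operator does not exist: text > integer
SELECT '42'::int > 0;   -- an explicit cast is required since 8.3
```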
-
Committed by Heikki Linnakangas
Commit e314acb1 changed the 'count_operator' helper function to include the EXPLAIN, so that it doesn't need to be given in the argument query anymore. But many of the calls of count_operator were not changed, and still contained EXPLAIN in the query, and as a result, they failed with 'syntax error at or near "explain"'. These syntax errors were accidentally memorized in the expected output. Revert the expected output to what it was before, and remove the EXPLAIN from the queries instead.
-
Committed by Heikki Linnakangas
We have these exact same tests twice, with and without schema-qualifying the table name. That's hardly a meaningful difference when testing the grammar of the SUBPARTITION TEMPLATE part. Remove the duplicated tests. (I'm not convinced it's useful to have even a single copy of these tests, but keep them for now.)
-
Committed by Heikki Linnakangas
Both bfv_partition and partition_ddl had essentially the same test. Keep the copy in partition_ddl, and move the "alter table" commands that were only present in the bfv_partition copy there.
-
Committed by Heikki Linnakangas
These negative tests throw an error in the parse analysis phase already. Whether the target table is an AO or AOCO table is not interesting.
-
- 28 December 2017 (3 commits)
-
Committed by Nadeem Ghani
The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to the function gp_bloat_diag() as an integer in the definition of the view of the same name, gp_bloat_diag. This caused integer overflow errors when the number of expected pages exceeded the max integer limit, for columns with very large widths. This changes the function signature and call to use numeric for the btdexppages parameter.

Add a simple test to mimic the customer issue.

Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
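A sketch of the overflow the numeric parameter avoids:

```
SELECT 3000000000::numeric;  -- fine
SELECT 3000000000::integer;  -- ERROR: integer out of range
```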
-
Committed by Bhuvnesh Chaudhary
Test files are also updated in this commit, as we now don't generate the cross-join alternative if an input join was present. A cross join contains CScalarConst(1) as the join condition. Suppose the input expression is as below, with a cross join at the top level between CLogicalInnerJoin and CLogicalGet "t3":

```
+--CLogicalInnerJoin
   |--CLogicalInnerJoin
   |  |--CLogicalGet "t1"
   |  |--CLogicalGet "t2"
   |  +--CScalarCmp (=)
   |     |--CScalarIdent "a" (0)
   |     +--CScalarIdent "b" (9)
   |--CLogicalGet "t3"
   +--CScalarConst (1)
```

For the above expression, the (lower) predicate generated for the cross join between t1 and t3 will be CScalarConst (1). Only in such cases, do not generate the alternative with the lower join as a cross join, for example:

```
+--CLogicalInnerJoin
   |--CLogicalInnerJoin
   |  |--CLogicalGet "t1"
   |  |--CLogicalGet "t3"
   |  +--CScalarConst (1)
   |--CLogicalGet "t2"
   +--CScalarCmp (=)
      |--CScalarIdent "a" (0)
      +--CScalarIdent "b" (9)
```

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Xin Zhang
If the first insert into an AOCS table is aborted, the first row number of the visible blocks in the block directory should be greater than 1. By default, we initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns, so the first row numbers are not consistent between the visible blocks. This caused an inconsistency between the base table scan and the scan using indexes through the block directory. This wrong-result issue only happened for the first invisible blocks; the current code (`aocs_addcol_endblock()`, called in `ATAocsWriteNewColumns()`) already handles other gaps after the first visible blocks. The fix updates `blockFirstRowNumber` with `expectedFRN`, which fixes the misalignment of visible blocks.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
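A hypothetical repro sketch, assembled from the description above (not taken from the commit):

```
CREATE TABLE t (a int) WITH (appendonly=true, orientation=column);
BEGIN;
INSERT INTO t SELECT generate_series(1, 1000);
ABORT;                                     -- the first blocks become invisible
INSERT INTO t SELECT generate_series(1, 1000);
ALTER TABLE t ADD COLUMN b int DEFAULT 7;  -- new column goes through DatumStreamWriter
CREATE INDEX t_a_idx ON t (a);
SET enable_seqscan = off;
SELECT count(*) FROM t WHERE a > 0;        -- must agree with the base table scan: 1000
```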
-
- 22 December 2017 (2 commits)
-
Committed by sambitesh
This query was added to "percentile" in commit b877cd43, as outer references were allowed after we backported ordered-set aggregates. But the query was using an ARRAY sublink, which semantically does not guarantee ordering. This caused sporadic failures in tests. This commit tweaks the test query so that it has a deterministic ordering in the output array. We considered just adding an ORDER BY to the subquery, but ultimately we chose to use `array_agg` with an `ORDER BY`, because subquery order is not preserved per the SQL standard.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
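A sketch of the difference (hypothetical table):

```
SELECT ARRAY(SELECT x FROM t);          -- element order not guaranteed
SELECT array_agg(x ORDER BY x) FROM t;  -- deterministic
```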
-
Committed by Jacob Champion
The portals_updatable test makes use of the public.bar table, which the partition_pruning test occasionally dropped. To fix, don't fall back to the public schema in the partition_pruning search path. Put the temporary functions in the partition_pruning schema as well for good measure. Author: Asim R P <apraveen@pivotal.io> Author: Jacob Champion <pchampion@pivotal.io>
-
- 21 December 2017 (2 commits)
-
Committed by Haisheng Yuan
This reverts commit 4fac169fb1204de54a05ac14fba1a5e4d9f82c08.
-
Committed by Haisheng Yuan
-
- 14 December 2017 (2 commits)
-
Committed by Pengzhou Tang
When declaring a cursor, the QD will not block until the QEs have gotten a snapshot and set up an interconnect, so we cannot guarantee that the QD always sees that 'qe_got_snapshot_and_interconnect' was triggered. For the test case itself, if the writer QE can be slower than the reader QE without the help of the fault injector, that's even better.
-
Committed by Max Yang
...the same as upstream. In upstream, internal_bpchar_pattern_compare compares inputs ignoring trailing spaces, but in GPDB it just used whole-string comparison. The bug didn't appear earlier because, before the PG_MERGE_84 merge, GPDB just used a TableScan when executing the following query; after PG_MERGE_84, an IndexScan is used, and internal_bpchar_pattern_compare is used for the index:

```
create table tbl(id int4, v char(10));
create index tbl_v_idx_bpchar on tbl using btree(v bpchar_pattern_ops);
insert into tbl values (1, 'abc');
explain select * from tbl where v = 'abc '::char(20);
select * from tbl where v = 'abc '::char(20);
```

Author: Xiaoran Wang <xiwang@pivotal.io>
-
- 13 December 2017 (5 commits)
-
When translating the query to DXL, Orca maps the attno to a column id in a hash map. The hash map failed to insert values for a key that is already present. The Orca translator fails an assertion when the key-value pair is not inserted successfully. When GPDB is built in non-debug mode, we don't see Orca falling back, and it produces the correct plan. So it is OK to remove the assertion and release the memory allocated for the key and value when the insertion fails.
-
Committed by Andreas Scherbaum
* Make SPI work with 64-bit counters
* Fix GET DIAGNOSTICS
* Remove the earlier introduced SPI_processed64 variable

This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191

https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.
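A minimal PL/pgSQL sketch of the 64-bit path described above (function and table names are hypothetical):

```
CREATE OR REPLACE FUNCTION insert_and_count() RETURNS bigint AS $$
DECLARE
    n bigint;  -- with 64-bit SPI counters, row counts above 2^31 survive intact
BEGIN
    INSERT INTO big_table SELECT generate_series(1, 100000);
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```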
-
Committed by Shreedhar Hardikar
The original name was deceptive, because this check is also done for QE slices that run on the master. For example:

```
EXPLAIN SELECT * FROM func1_nosql_vol(5), foo;
                                          QUERY PLAN
--------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.30..1.37 rows=4 width=12)
   ->  Nested Loop  (cost=0.30..1.37 rows=2 width=12)
         ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=8)
         ->  Materialize  (cost=0.30..0.33 rows=1 width=4)
               ->  Broadcast Motion 1:3  (slice1)  (cost=0.00..0.30 rows=3 width=4)
                     ->  Function Scan on func1_nosql_vol  (cost=0.00..0.26 rows=1 width=4)
 Settings:  optimizer=off
 Optimizer status: legacy query optimizer
(8 rows)
```

Note that in the plan, the function func1_nosql_vol() will be executed on a master slice with Gp_role as GP_ROLE_EXECUTE.

Also, update output files.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Shreedhar Hardikar
We don't want to use the optimizer for planning queries in SQL, PL/pgSQL, etc. functions when that is done on the segments. ORCA excels in complex queries, most of which will access distributed tables. We can't run such queries from the segment slices anyway, because they require dispatching a query within another query, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support PL/* statements (relevant when they are planned on the segments). For these reasons, restrict using ORCA to the master QD processes only.

Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault injector tests into the newly added gporca_faults.sql, so that the rest can run in a parallel group.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Sambitesh Dash
Prior to the postgres window merge, `aggdistinct` in `Aggref` was a bool and passed as-is to GPORCA. Now, it is a list of `SortGroupClause` objects. This commit translates `aggdistinct` into a bool to pass to ORCA, and while translating from DXL to PLNSTMT, a list of `SortGroupClause` is created from the corresponding args in the `Aggref`.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
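The kind of query that exercises this path (my own minimal example):

```
-- the Aggref for count() carries aggdistinct, now a list of SortGroupClause
SELECT count(DISTINCT a) FROM t;
```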
-
- 09 December 2017 (3 commits)
-
Committed by Heikki Linnakangas
This code has been modified in GPDB to turn "OVER (w)" into the same as "OVER w". (Whether that's a good idea is another debate, but that's what we have now, for compatibility with GPDB 5 and below.) But the error printing code was wrong, and printed 'window "(null)" does not exist' for the OVER (w) syntax, if 'w' was not a valid window name.
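A sketch of the GPDB-ism and the error path (hypothetical table):

```
SELECT sum(a) OVER w   FROM t WINDOW w AS (PARTITION BY b);  -- standard form
SELECT sum(a) OVER (w) FROM t WINDOW w AS (PARTITION BY b);  -- GPDB: same as OVER w
SELECT sum(a) OVER (v) FROM t WINDOW w AS (PARTITION BY b);  -- error should name "v"
```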
-
Committed by Heikki Linnakangas
When a backend exits normally, the "pg_temp_<sessionid>" schema is dropped. In GPDB 5, with the 8.3 merge, there is now a "pg_temp_toast_<sessionid>" schema in addition to the temp schema, but it was not dropped. As a result, you would end up with a lot of unused pg_temp_toast_* schemas. To fix, also drop the temp toast schema at backend exit. We will still leak temp schemas, and temp toast schemas, if a backend exits abnormally, or if the server crashes. That's not a new issue, but we should probably do something about that in the future, too. Fixes github issue #4061. Backport to 5x_STABLE, where the toast temp namespaces were introduced.
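One way to spot leftovers from abnormal exits (my own sketch, using the naming from the message above):

```
-- matches both pg_temp_<sessionid> and pg_temp_toast_<sessionid>
SELECT nspname FROM pg_namespace WHERE nspname LIKE 'pg_temp%';
```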
-
Committed by Jacob Champion
Upstream commit 43a57cf3, which significantly changes the API for the HashBitmap (TIDBitmap in Postgres), is about to hit in an upcoming merge. This patch is a joint effort by myself, Max Yang, Xiaoran Wang, Heikki Linnakangas, and Daniel Gustafsson to reduce our diff against upstream and support the incoming API changes with our GPDB-specific customizations.

The primary goal of this patch is to support concurrent iterations over a single StreamBitmap or TIDBitmap. GPDB has made significant changes to allow either one of those bitmap types to be iterated over without the caller necessarily needing to know which is which, and we've kept that property here.

Here is the general list of changes:

- Cherry-pick the following commit from upstream:

    commit 43a57cf3
    Author: Tom Lane <tgl@sss.pgh.pa.us>
    Date: Sat Jan 10 21:08:36 2009 +0000

    Revise the TIDBitmap API to support multiple concurrent iterations over a bitmap. This is extracted from Greg Stark's posix_fadvise patch; it seems worth committing separately, since it's potentially useful independently of posix_fadvise.

- Revert as much as possible of the TIDBitmap API to the upstream version, to avoid unnecessary merge hazards in the future.
- Add a tbm_generic_ version of the API to differentiate upstream's TIDBitmap-only API from ours. Both StreamBitmap and TIDBitmap can be passed to this version of the API.
- Update each section of code to use the new generic API.
- Fix up some memory management issues in bitmap.c that are now exacerbated by our changes.
-