- 13 Dec 2017, 15 commits
-
-
Committed by Chuck Litzell
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Zhenghua Lyu
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Shreedhar Hardikar
The default value of Gp_role is set to GP_ROLE_DISPATCH, which means auxiliary processes inherit this value. FileRep does the same, but also executes queries using SPI on the segment. This means Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master QD. So, bring back the check on GpIdentity.

Author: Asim R P <apraveen@pivotal.io>
Author: Shreedhar Hardikar <shardikar@pivotal.io>
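A conceptual sketch of the check described above, in Python rather than GPDB's actual C code; the function name and constants are illustrative, not the real GPDB identifiers:

```python
# Conceptual sketch (illustrative names, not GPDB's C code): the role
# alone cannot identify the master QD, because segment-side auxiliary
# processes (e.g. FileRep running SPI queries) inherit the default role.

GP_ROLE_DISPATCH = "dispatch"
MASTER_CONTENT_ID = -1  # content id of the master in this sketch

def is_master_qd(gp_role, segment_content_id):
    """True only on the master QD: the role check must be combined
    with an identity check, since checking the role by itself is
    not sufficient."""
    return (gp_role == GP_ROLE_DISPATCH
            and segment_content_id == MASTER_CONTENT_ID)

print(is_master_qd(GP_ROLE_DISPATCH, 0))   # False: a segment process, despite the dispatch role
print(is_master_qd(GP_ROLE_DISPATCH, -1))  # True: the real master QD
```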
-
When translating the query to DXL, Orca maps the attno to a column id in a hash map. The hash map fails to insert a value for a key that is already present, and the Orca translator asserted that each key-value pair was inserted successfully. When gpdb is built in non-debug mode, we don't see Orca falling back, and it produces the correct plan. So it is OK to remove the assertion and instead release the memory allocated for the key and value when the insertion fails.
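A hedged sketch (Python, not Orca's C++ hash map) of the behavior change: report a duplicate-key insert to the caller instead of asserting, so the caller can release the duplicate pair:

```python
# Sketch of insert-if-absent semantics: return False on a duplicate key
# rather than asserting success, leaving cleanup to the caller.

def insert_if_absent(mapping, key, value):
    """Return True if (key, value) was inserted, False if the key
    already existed (caller then frees its copy of the pair)."""
    if key in mapping:
        return False
    mapping[key] = value
    return True

attno_to_colid = {}
first = insert_if_absent(attno_to_colid, 1, 100)
# A second insert for the same attno used to trip an assertion; now it
# is simply reported, and the original mapping is kept.
second = insert_if_absent(attno_to_colid, 1, 200)
print(first, second)          # True False
print(attno_to_colid[1])      # 100
```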
-
Committed by Mel Kiyama
Missed in an earlier update.
-
Committed by Andreas Scherbaum
* Make SPI work with 64 bit counters
* Fix GET DIAGNOSTICS
* Remove the earlier introduced SPI_processed64 variable

This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191

https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.
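A small illustration (Python, hedged) of why a 64-bit processed-row counter matters: a 32-bit unsigned counter silently wraps once more than 2**32 - 1 rows are processed, so a row count reported through a 32-bit variable would be truncated:

```python
# Simulate storing a row count in a 32-bit unsigned C variable versus a
# 64-bit one. The 32-bit value wraps modulo 2**32 and is silently wrong.

def as_uint32(n):
    """Truncate an integer to 32 unsigned bits, as a uint32 store would."""
    return n & 0xFFFFFFFF

rows_processed = 5_000_000_000  # 5 billion rows, plausible on a large cluster
print(as_uint32(rows_processed))  # 705032704 -- silently truncated
print(rows_processed)             # a 64-bit counter keeps the true value
```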
-
Committed by Shreedhar Hardikar
The original name was deceptive because this check is also done for QE slices that run on the master. For example:

EXPLAIN SELECT * FROM func1_nosql_vol(5), foo;
                                         QUERY PLAN
--------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.30..1.37 rows=4 width=12)
   ->  Nested Loop  (cost=0.30..1.37 rows=2 width=12)
         ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=8)
         ->  Materialize  (cost=0.30..0.33 rows=1 width=4)
               ->  Broadcast Motion 1:3  (slice1)  (cost=0.00..0.30 rows=3 width=4)
                     ->  Function Scan on func1_nosql_vol  (cost=0.00..0.26 rows=1 width=4)
 Settings:  optimizer=off
 Optimizer status: legacy query optimizer
(8 rows)

Note that in the plan, the function func1_nosql_vol() will be executed on a master slice with Gp_role as GP_ROLE_EXECUTE.

Also, update output files.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Shreedhar Hardikar
We don't want to use the optimizer for planning queries in SQL, pl/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which will access distributed tables. We can't run such queries from the segment slices anyway because they require dispatching a query within another, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support pl/* statements (relevant when they are planned on the segments). For these reasons, restrict ORCA usage to the master QD processes only.

Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault injector tests into the newly added gporca_faults.sql so that the rest can run in a parallel group.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Sambitesh Dash
Prior to the postgres window merge, `aggdistinct` in `Aggref` was a bool and was passed as-is to GPORCA. Now, it is a list of `SortGroupClause` objects. This commit translates `aggdistinct` into a bool to pass to ORCA, and while translating from DXL to PLNSTMT, a list of `SortGroupClause` objects is created from the corresponding args in the `Aggref`.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Heikki Linnakangas
It was missing ALTER TYPE RENAME TO, which we got as part of the 8.4 merge earlier. In passing, remove some GPDB-added paragraphs from the SGML docs. The SGML docs are currently not used for anything other than creating the \h help text, so the extra explanations there do nothing but create merge conflicts.
-
Committed by Bhuvnesh Chaudhary
Unit tests should be run after make install, as they require the timezone files under the `<gpdb>/share/postgresql/timezone` directory, which are placed there by `make install`. This reordering is required after commit `8ce6f34a`.

Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Lisa Owen
* docs - updates for gphdfs jar file changes
* updates include:
  - add note that default value gphd-1.1. is not supported
  - remove references to Pivotal and Greenplum HD
-
Committed by David Yozie
-
- 12 Dec 2017, 19 commits
-
-
Committed by Heikki Linnakangas
This was left out during the merge, because when we started merging this, we used to track the amount of memory used by the hash table differently. It was changed back to the way it is in upstream in commit f4d60673, before this code was merged in, but I forgot to restore these calls. Also re-enable increasing the number of batches in hash join when adding to the skew bucket.
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
These error codes were marked as deprecated in September 2007 but the code didn't get the memo. Extend the deprecation into the code and actually replace the usage. Ten years seems long enough notice so also remove the renames, the odds of anyone using these in code which compiles against a 6X tree should be low (and easily fixed).
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
It was added as a performance optimization back in 2007, but it's not doing much at the moment. Some quick testing on my laptop suggests that it's a wash at best, and makes the join slightly slower in some scenarios. This will reduce our diff vs upstream, reducing the chance of merge conflicts in the future.
-
-
Committed by Jimmy Yih
-
Committed by Jialun
Update grep keywords to filter out unrelated programs.
-
Committed by Xiaoran Wang
Backport of upstream commit:

commit 95c238d9
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Sat Mar 8 01:16:26 2008 +0000

    Improve efficiency of attribute scanning in CopyReadAttributesCSV.
    The loop is split into two parts, inside quotes, and outside quotes,
    saving some instructions in both parts.

Author: Max Yang <myang@pivotal.io>
-
Committed by Xin Zhang
There is a race condition on the postmaster where the first request to start XLOG streaming (RequestXLogStreaming()) is ignored because the WalReceiverPID hasn't yet been reset to zero by do_reaper(). In the worst case, it takes 10 seconds to restart a failed WAL receiver. To work around the issue, we increase the timeout to 20 seconds to ensure the WAL receiver is properly restarted.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Taylor Vesely <tvesely@pivotal.io>
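A hedged sketch (Python, illustrative names only) of the poll-until-timeout pattern the workaround relies on: the timeout must cover the worst-case restart delay, which is why it was raised from 10 to 20 seconds:

```python
# Poll a condition until it holds or a deadline passes. If the condition
# can legitimately take up to N seconds to become true (here, a WAL
# receiver restart), the timeout must be comfortably larger than N.

import time

def wait_for(condition, timeout_s, poll_s=0.05):
    """Poll `condition` until it returns True or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

# Example: a condition that becomes true after a short simulated delay.
start = time.monotonic()
restarted = wait_for(lambda: time.monotonic() - start > 0.1, timeout_s=2.0)
print(restarted)
```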
-
Committed by Ashwin Agrawal
Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Xin Zhang
Based on the upstream commit:

commit 49a33602
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Wed Jul 12 15:30:52 2017 +0300

    Fix ordering of operations in SyncRepWakeQueue to avoid assertion failure.

We fix a similar race condition in the same function: flipping the order of setting syncRepState and dequeueing fixes the issue. We are not able to cherry-pick the original commit since it is in 9.5 and uses pg_read_barrier() and pg_write_barrier().

Discussion: https://www.postgresql.org/message-id/flat/CAEepm%3D3k%3DgTAn5%3DX_Qv%3DhWw9JnUxUMXCzBxTKPaHHXxKkF0%2Biw%40mail.gmail.com#CAEepm=3k=gTAn5=X_Qv=hWw9JnUxUMXCzBxTKPaHHXxKkF0+iw@mail.gmail.com

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
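A hedged, deterministic illustration (Python, not the real shared-memory C code) of why the ordering matters: once an entry is off the queue, the waiting backend may run at any moment and read its state, so the state must be final before the dequeue:

```python
# Simulate the two orderings. `observe` stands in for the waiter waking
# up at the worst possible moment, right after its entry leaves the queue.

WAITING, COMPLETE = "waiting", "complete"

def wake_buggy(entry, queue, observe):
    queue.remove(entry)        # waiter may now proceed...
    observe(entry)             # ...and read its state here...
    entry["state"] = COMPLETE  # ...before this store happens

def wake_fixed(entry, queue, observe):
    entry["state"] = COMPLETE  # state is final before the waiter can run
    queue.remove(entry)
    observe(entry)

seen = []
observe = lambda e: seen.append(e["state"])

e = {"state": WAITING}; wake_buggy(e, [e], observe)
e = {"state": WAITING}; wake_fixed(e, [e], observe)
print(seen)  # ['waiting', 'complete'] -- only the fixed order is safe
```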
-
Committed by Heikki Linnakangas
The check was introduced by the 8.4 merge, but I had disabled it because we were tripping it. I re-enabled it in commit 1df4698a, thinking that the other changes in that commit made it work again, but we're seeing pipeline failures on many filerep-related test suites because of this. So disable it again. We really shouldn't hit that sanity check, but the current plan is to revamp this code greatly before the next release, as we are about to start replacing the GPDB-specific file replication with upstream WAL replication. Chances are that it will start to just work once that work is done, so I'm not going to spend any more time investigating this right now. Analysis by Jacob Champion.
-
-
Committed by Heikki Linnakangas
We should figure out how to make gp_replica_check more robust, so that it doesn't need the aggressive restartpointing. Is there any way to signal the mirror to do a restartpoint before doing the comparisons? But for now, this at least silences the failure, so that we can move on.
-
Committed by Heikki Linnakangas
These fields and code were in a different order than in upstream. Move them to where they are in PostgreSQL 8.4 / 9.0, to reduce our diff. This also re-enables one assertion that was added in upstream 8.4 but was disabled in the merge. Adding the LocalSetXLogInsertAllowed() call before the rm_cleanup() calls made that assertion work again. We had missed that in the merge.
-
Committed by Heikki Linnakangas
The XLOG_BACKUP_END record type was added in the 8.4 merge.
-
Committed by Ashwin Agrawal
-
Committed by Jacob Champion
initBitmapState doesn't do anything anymore; its logic has effectively been moved to other areas of the code.
-
- 11 Dec 2017, 4 commits
-
-
Committed by Daniel Gustafsson
The AC_INCLUDES_DEFAULT macro expands to text for the preprocessor to include the standard C headers; just running it bare renders an error without checking anything. The macro for actually testing the standard includes, AC_HEADER_STDC, is deprecated since all standard autoconf test macros use AC_INCLUDES_DEFAULT. And if we really find ourselves on a system without the C89 headers, then testing for them specifically won't help us much, since lots of other tests will fail as well (probably before this one would have a chance to run).

In removing this, the code for checking ORCA headers with C++ was injected before testing C headers, which caused errors later when checking types. Fix by moving the ORCA checks last, and while doing so also move library checks to the correct section of the autoconf script.

This also performs minor copyediting on header file check error messages adjacent to the affected code.
-
Committed by Daniel Gustafsson
While there is a target in gpAux/extensions for building orafuncs, we might as well allow it to be built without PGXS as well, which is handy for quick compile tests. Set the correct path to the builddir so that Makefile dependencies can be picked up.
-
Committed by Daniel Gustafsson
The GPTest.pm file is symlinked as a requirement for gpdiff; add it to the ignore list.
-
Committed by Heikki Linnakangas
Merge up to commit '4d53a2f9' into GPDB. That is the point where REL8_4_STABLE was branched in the upstream, and 9.0 development began. Notable changes:

* Column-level privileges. Some new code was needed to copy column-level privileges to the new partition at ALTER TABLE ADD PARTITION.

* Pre-fetching in bitmap heap scans. There is no API to do prefetching of AO tables, so no prefetching on bitmap AO table scans. I'm not sure how feasible or efficient prefetching would be with the AO file format, but that's something that should perhaps be investigated.

* Start background writer during archive recovery. In GPDB, there is no archive recovery as such, because we don't do WAL archiving. But the code is there. In GPDB, also launch the checkpointer process; in PostgreSQL the bgwriter process is also responsible for checkpointing (for now; that changes in a later PG release). And we launch those processes before startup pass 3 (the division of the startup process into passes is GPDB-specific).

* Add "skew" optimization to hash joins. This keeps the most frequent values in the join in memory throughout all join passes. This makes multi-pass hash joins cheaper when there is a lot of skew.

Author: Jacob Champion <pchampion@pivotal.io>
Author: Daniel Gustafsson <dgustafsson@pivotal.io>
Author: Max Yang <myang@pivotal.io>
Author: Xiaoran Wang <xiwang@pivotal.io>
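The skew optimization mentioned in the last bullet can be sketched as follows. This is a hedged, heavily simplified Python model, not the PostgreSQL implementation: build-side rows whose key is a most-common value stay in an always-in-memory skew table, so probe rows with those hot keys never need to be spilled and re-read per batch:

```python
# Simplified skew hash join: hot keys (MCVs) go to an in-memory skew
# table; all other keys are partitioned into batches as usual.

def skew_hash_join(build, probe, mcvs, n_batches=4):
    """Join two lists of (key, payload) pairs; `mcvs` is the set of
    hot keys kept memory-resident across all batches."""
    skew_table = {}                      # always in memory
    batches = [dict() for _ in range(n_batches)]
    for key, val in build:
        table = skew_table if key in mcvs else batches[hash(key) % n_batches]
        table.setdefault(key, []).append(val)

    out = []
    for key, val in probe:
        # Hot keys match immediately; others consult their hash batch.
        table = skew_table if key in mcvs else batches[hash(key) % n_batches]
        for build_val in table.get(key, []):
            out.append((key, build_val, val))
    return sorted(out)

build = [(1, "a"), (1, "b"), (2, "c"), (3, "d")]
probe = [(1, "x"), (2, "y"), (4, "z")]
print(skew_hash_join(build, probe, mcvs={1}))
# [(1, 'a', 'x'), (1, 'b', 'x'), (2, 'c', 'y')]
```

The join result is identical with or without the skew table; only the I/O pattern differs in the real multi-batch case.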
-
- 10 Dec 2017, 2 commits
-
-
Committed by Heikki Linnakangas
gpcheckcat now knows how to drop orphaned temp toast schemas, in addition to temp schemas. Fix the expected output, since the test now reports dropping 2 orphaned schemas (the temp schema and the temp toast schema) instead of 1.
-
Committed by Heikki Linnakangas
There was a mechanism to keep the PID of the background writer process in shared memory, but it was unused. Remove.
-