- 14 May 2019, 4 commits
-
-
Committed by Chuck Litzell
* Source for the segment recovery flowchart * Rename gprecovery-flow-chart.graffle to recovermatrix.graffle to match png filename
-
Committed by Lisa Owen
* docs - add info about pxf jdbc statement properties * misc edits requested by david
-
Committed by Daniel Gustafsson
The utilities reference page contained (unlinked) mentions of long since deprecated utilities, and since they aren't shipping it's time to remove them:
* gpsizecalc - removed in June 2010
* gpskew - removed in October 2009
* gprebuildsystem - removed in January 2010
* gpchecknet - removed in April 2010
* gpcheckos - removed in July 2016
Reviewed-by: Lisa Owen
Reviewed-by: David Yozie
-
Committed by Daniel Gustafsson
gpdetective was removed in 4.3.5, so it seems about time to also remove the documentation for it (which was unreachable due to the app being marked deprecated). Reviewed-by: Lisa Owen Reviewed-by: David Yozie
-
- 13 May 2019, 2 commits
-
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
- 11 May 2019, 3 commits
-
-
Committed by Venkatesh Raghavan
-
Committed by Lisa Owen
* docs - add pageinspect module landing page * remove peer scope * cannot inspect ao or external relations
-
Committed by Jesse Zhang
The "combine" function for the int4 sum/avg aggregate functions was backported to Greenplum 6.0 in commit 313cef6e (from postgres/postgres@11c8669c0cc), but we made the inevitable omission: the "strictness" of int4_avg_combine was left at false. This by itself is harmless, as the actual function body *does* guard against NULL input, but it prohibits a whole host of optimizations in which the executor and planner detect NULL input early on and short-circuit execution. Oops. This patch flips the `proisstrict` flag back to true for int4_avg_combine. Backpatch to 6X_STABLE. (cherry picked from commit c334bedc)
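The flag in question can be inspected in the system catalog. A minimal sketch (function name from the commit; catalog table and columns are standard PostgreSQL):

```sql
-- proisstrict = true tells the executor it may skip the call entirely
-- when any input argument is NULL, enabling the short-circuit
-- optimizations described above.
SELECT proname, proisstrict
FROM pg_catalog.pg_proc
WHERE proname = 'int4_avg_combine';
```

After the patch, this query should report `proisstrict` as true on a 6X_STABLE build.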
-
- 10 May 2019, 5 commits
-
-
Committed by Pengzhou Tang
In pg_get_expr(), after getting the relname, if the table the relid refers to has been dropped, an error is raised later when opening the relation to get column names. pg_get_expr() is used by the GPDB add-on view 'pg_partitions', which is widely used by regression tests for partition tables. Many parallel test cases query pg_partitions and drop partition tables concurrently, so those cases are very flaky. Serializing the test cases would cost more testing time and still be fragile, so GPDB holds an AccessShareLock here to make the tests stable.
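The racy pattern looks roughly like this (the table name is hypothetical; pg_partitions is the GPDB view named in the commit):

```sql
-- Session 1: a regression test inspecting partition metadata
SELECT * FROM pg_partitions WHERE tablename = 'sales';

-- Session 2, running concurrently:
DROP TABLE sales;
-- Before the fix, session 1 could resolve the relid, lose the race to
-- the DROP, and then error out when pg_get_expr() opened the relation
-- to look up column names. The AccessShareLock closes this window.
```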
-
Committed by Lisa Owen
-
Committed by Sambitesh Dash
optimizer_enable_dml is set to true by default. When it is set to false, ORCA falls back to the planner for all DML queries.
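For example, the fallback can be exercised at the session level (the GUC name is from the commit; the table is hypothetical):

```sql
-- With ORCA enabled, disabling optimizer_enable_dml makes all DML
-- statements fall back to the Postgres planner.
SET optimizer_enable_dml = off;

INSERT INTO my_table VALUES (1);  -- planned by the legacy planner
UPDATE my_table SET a = 2;        -- likewise falls back
```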
-
Committed by Lisa Owen
* docs - add to pxf upgrade procedure (v5.0.1 to v5.3.2) * remove extraneous to
-
Committed by Daniel Gustafsson
When referring to the product name and not a database instance running Greenplum, the capitalization should be "Greenplum Database".
-
- 09 May 2019, 13 commits
-
-
Committed by Peifeng Qiu
In the past several weeks, gpload test case 22 has failed twice, reporting: Fatal Python error: GC object already tracked. We have tried to reproduce this issue locally and in the dev pipeline for 3 days, but it cannot be reproduced. So we have decided to disable the test for now, in order not to block other PRs.
-
Committed by Pengzhou Tang
Assume user1 has privileges on database db1 and user2 does not. When user1 tries to create a schema in db1 and authorize it to user2, a permission denied error is reported on the QE. The root cause is that the QD set the current user to user2 before dispatching the query to the QEs, so each QE also set the current user to user2; however, user2 has no privilege to create a schema in database db1. To fix this, we delay setting the current user to user2 until the query has been dispatched to the QEs.
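The failing scenario can be sketched as follows (role, database, and schema names are hypothetical; only the privilege shape matters):

```sql
CREATE ROLE user1 LOGIN;
CREATE ROLE user2 LOGIN;
CREATE DATABASE db1;
GRANT CREATE ON DATABASE db1 TO user1;  -- user2 deliberately gets nothing

-- Connected to db1 as user1:
CREATE SCHEMA app_schema AUTHORIZATION user2;
-- Before the fix, the QEs switched to user2 before executing the
-- dispatched statement and reported:
--   ERROR: permission denied for database db1
```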
-
Committed by Sambitesh Dash
-
Committed by Pengzhou Tang
GPDB used to allow a command like "START_REPLICATION %X/%X [SYNC]" to start replication; a user could specify the SYNC option to skip waiting for synchronization in replication. Now the start replication command is made similar to upstream and the SYNC option is not supported; however, the internal flag "synchronous" is still used and is always false, which makes the master and standby never synchronize.
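For reference, the command now follows the upstream protocol form (the LSN below is an arbitrary example; the command is issued over a replication connection, not a normal SQL session):

```sql
-- Upstream-style physical replication start; the GPDB-specific
-- trailing SYNC option is no longer accepted.
START_REPLICATION 0/3000000;
```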
-
Committed by Lisa Owen
* docs - enhance pxf jdbc partitioning content * add missing comma * simplify some content
-
Committed by Lisa Owen
* docs - clarify pxf filter partitioning support for hive * clarify the hadoop cfg update content * remove cluster start * edits requested by david * remove disable statement per shivram
-
Committed by Lisa Owen
* docs - add pxf jdbc.connection.transactionIsolation server cfg property * use read uncommitted in example; add external
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - jdbc cfg supports connection- and session-level properties * some edits requested by david * reword jdbc server cfg opening paragraph * clarify rejected session prop/value chars as requested by ivan
-
Committed by Lisa Owen
* docs - discuss pxf fragment metadata caching * a large number of
-
Committed by Lisa Owen
* docs - pxf init/sync support to master standby * edits requested by david * edits requested by francisco and oliver * pxf sync from master TO standby or seg host * identify sync run on master in pxf sync option description
-
Committed by Karen Huddleston
Some of the task files were used by jobs in the orca pipeline, but those jobs have been removed, so the files are no longer used. Co-authored-by: Karen Huddleston <khuddleston@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Yozie
-
- 08 May 2019, 6 commits
-
-
Committed by Ning Yu
A motion hazard is a deadlock between motions. A classic motion hazard in a join executor is formed by its inner and outer motions; it can be prevented by prefetching the inner plan (refer to motion_sanity_check() for details). A similar motion hazard can be formed by the outer motion and the join qual motion. A join executor fetches an outer tuple, filters it with the join qual, then repeats the process for all the outer tuples. When there are motions in both the outer plan and the join qual, the state below is possible:
0. processes A and B belong to the join slice, process C belongs to the outer slice, process D belongs to the JoinQual slice;
1. A has read the first outer tuple and is fetching tuples from D;
2. D is waiting for an ACK from B;
3. B is fetching the first outer tuple from C;
4. C is waiting for an ACK from A.
So a deadlock A->D->B->C->A is formed. We can prevent it by also prefetching the join qual.
Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
Reviewed-by: Gang Xiong <gxiong@pivotal.io>
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
(cherry picked from commit fa762b69)
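A query shape that can place motions both under the join's outer side and inside the join qual looks roughly like this (table names and distribution keys are hypothetical; whether Motion nodes actually appear depends on how the tables are distributed):

```sql
-- If t1, t2, and t3 are distributed on different keys, redistribution
-- (Motion) nodes can appear under the outer plan and inside the
-- subplan evaluated as part of the join qual, creating the hazard
-- described above unless the join qual is prefetched.
SELECT *
FROM t1
JOIN t2
  ON t1.a = t2.a
 AND t1.b > (SELECT avg(t3.b) FROM t3 WHERE t3.a = t1.a);
```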
-
Committed by David Yozie
* Docs: fixing several broken links * Adding info about GPORCA support notice * add missing comma
-
Committed by David Sharp
Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io> (cherry picked from commit 8af57a90)
-
Committed by David Sharp
This moves us towards dropping Ubuntu 16.04 in preparation for adding an Ubuntu 18.04 build, as well as removing the remaining uses of Conan. 5X_STABLE and master have diverged enough that this pipeline can't build 5X_STABLE PRs. Add a "base_branch" filter (newly added to the github-pr-resource) to build only PRs against master. Authored-by: David Sharp <dsharp@pivotal.io> (cherry picked from commit 4f3d158e) (cherry picked from commit 76e7b822)
-
Committed by Adam Berlin
There was a concern that an exception during GetNonHistoricCatalogSnapshot would be problematic after setting the global variable and not resetting it back to its original value. This patch threads the desired distributed transaction context into GetNonHistoricCatalogSnapshot without modifying global state.
-
Committed by Adam Berlin
No longer rely on a global variable to determine the distributed snapshot context.
-
- 07 May 2019, 4 commits
-
-
Committed by Lisa Owen
* docs - address lock exhaustion shared mem error msg * capitalize Out in title
-
Committed by Paul Guo
Recursively create tablespace directories if they do not exist but are needed when re-redoing some tablespace-related xlogs (e.g. database create with a tablespace) on a mirror.

It has been observed many times recently that the gp_replica_check test fails because some mirror nodes can not be brought up before testing. The related log looks like this:

2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory "pg_tblspc/65546/PG_12_201904072/65547": No such file or directory
2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for Database/CREATE: copy dir 1663/1 to 65546/65547

That is because some mirror nodes can not be recovered after previous testing, not due to gp_replica_check itself; the root cause is related to tablespace recovery. Pengzhou Tang and Hao Wu dug into that initially and kindly found a mini repro, as below.

Run in a shell:
rm -rf /tmp/some_isolation2_pg_basebackup_tablespace
mkdir -p /tmp/some_isolation2_pg_basebackup_tablespace

Then copy and run the SQL below in a psql client:
drop tablespace if exists some_isolation2_pg_basebackup_tablespace;
create tablespace some_isolation2_pg_basebackup_tablespace location '/tmp/some_isolation2_pg_basebackup_tablespace';
\!gpstop -ra -M fast;
drop database if exists some_database_with_tablespace;
create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace;
drop database some_database_with_tablespace;
drop tablespace some_isolation2_pg_basebackup_tablespace;
\!gpstop -ra -M immediate;

The root cause: on the mirror, after drop database & drop tablespace, the 'immediate' stop leaves the pg_control file not up-to-date with the latest redo start lsn (this is allowed). When the node restarts, it re-redoes 'create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace', but the tablespace directories were deleted in the previous redo pass. The 'could not create directory' error could also happen when re-redoing create table in a tablespace.

We've seen this case in the CI environment, but that was because of a missing get_parent_directory() call in the 'create two parents' code block in TablespaceCreateDbspace(); that is changed to a simpler pg_mkdir_p() call instead.

It also seems that src_path could be missing in dbase_redo() for the example below, e.g. when re-redoing at the alter step, since the tbs1 directory is deleted in the later 'drop tablespace tbs1':

alter database db1 set tablespace tbs2;
drop tablespace tbs1;

There is a discussion about this on upstream:
https://www.postgresql.org/message-id/flat/CAEET0ZGx9AvioViLf7nbR_8tH9-%3D27DN5xWJ2P9-ROH16e4JUA%40mail.gmail.com

In this patch I recreate those directories to avoid this error. Other solutions include ignoring the directory-not-existing error, or forcing a flush when redoing those kinds of checkpoint xlogs, which are added normally in drop database, etc. Let's revert or update the code change after the solution is finalized upstream.
-
Committed by Jamie McAtamney
Now that the enterprise version of GPDB is only provided via RPM, including gpseginstall in the distribution would cause conflicts if users try to install GPDB with RPMs and with gpseginstall on the same cluster. While it could be preserved for use by the OSS community, there are several standard tools for copying GPDB files to segment hosts in a cluster, and recommendations for using one or more of those tools will be included in the GPDB documentation. In addition to removing gpseginstall itself, this commit removes references to it in other utilities' documentation and removes code in gppylib that was only called by gpseginstall. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> (cherry picked from commit 64014685)
-
Committed by Bhuvnesh Chaudhary
-
- 05 May 2019, 1 commit
-
-
Committed by Adam Lee
CI reported some read failures just after the write action; this is probably because the write has not yet been flushed by AWS. Sleep and retry if AWS returns a "NoSuchKey" error. Update the workflow but not the test cases, because users might hit the same issue. Co-authored-by: Peifeng Qiu <pqiu@pivotal.io>
-
- 04 May 2019, 1 commit
-
-
Committed by David Yozie
-
- 03 May 2019, 1 commit
-
-
Committed by Shreedhar Hardikar
Includes ICG changes for ORCA commit: 'Convert FULL OUTER JOIN to LEFT JOIN when possible'
-