- Dec 19, 2017 (2 commits)
-
-
Committed by Mel Kiyama
PR for 5X_STABLE. Will be ported to MAIN.
-
Committed by Lisa Owen
* docs - costing diffs between gporca/planner and RQ limits
* mention fallback
* RQs do not align/differentiate costs between planners
-
- Dec 18, 2017 (2 commits)
-
-
Committed by Chuck Litzell
* Enhance hardening docs on trust and ident
* Format source. No content changes.
-
Committed by Lav Jain
-
- Dec 16, 2017 (5 commits)
-
-
Committed by Marbin Tan
This is simply a setup/cleanup step for the behave tests, so be accommodating to try to get it to work. Scope: affects gpcheckcat.feature and backups.feature; these tests already have some timing affordances; this just adds a bit more backstop.

Author: Marbin Tan <mtan@pivotal.io>
Author: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Jimmy Yih
Synchronized replication is on by default for Greenplum. When the primary's mirror is detected as down, the primary will continue to block all commits until the mirror comes back up. This commit introduces a new FTS message that will be sent to the primary if an FTS probe detects that a mirror is down and the primary is stuck in the synchronous replication state. The new message allows the primary to turn off synchronous replication by setting the GUC synchronous_standby_names to the empty string (persisted in gp_replication.conf) and setting the WalSndCtl shared-memory syncrep state. As a result, backends that may be waiting for the mirror to receive a commit will be unblocked. FTS must note the mirror's status as down in the configuration before syncrep can be turned off by the primary.

Author: Jimmy Yih <jyih@pivotal.io>
Author: Asim R P <apraveen@pivotal.io>
Author: Jacob Champion <pchampion@pivotal.io>
-
Committed by Jimmy Yih
Along with the rename, fix the related unit tests.

Author: Jimmy Yih <jyih@pivotal.io>
Author: Asim R P <apraveen@pivotal.io>
-
Committed by Lav Jain
-
Committed by Lav Jain
* Cleanup makefiles for GPHDFS
* Fix HADOOP_TARGET_VERSION
* Change gphdfs_target_version tokens to hadoop, cdh, hdp, mpr
-
- Dec 15, 2017 (5 commits)
-
-
Committed by Daniel Gustafsson
-
Committed by Michael Roth
* Switching the chown to a more overlay-friendly chmod on directories
- Initial work is to remove the chown from the gpadmin user setup and replace it with a chmod a+w on the directories. This is sufficient for ICW, TINC and behave to run in most cases.
- Gpload2 needs the datafile to be owned by gpadmin
- Change to gpcloud as it was chowning the full directory
- PXF tests needed to be able to write to the pxf_automation_src directory. Updated tests to set directories world-writable instead of recursively chowning. Singlenode needs to be owned by gpadmin.

TODO: Change gpload2 to no longer need the datafile to be owned by gpadmin
TODO: Clean up singlenode ownership for PXF test
-
Committed by Daniel Gustafsson
Commit 94eacb66 backported pg_attribute_printf() but missed the part which defines PG_PRINTF_ATTRIBUTE. Since the function wasn't used in the backport, the missing declaration didn't yield any compiler warnings. The upcoming update of src/timezone to track the current PostgreSQL version will however use this and cause warnings. This is a partial backport of the below commit from PostgreSQL 9.5. One hunk of the referenced commit was a fix for a previous attempt, which was skipped since we never merged that fix (yet).

commit b779168f
Author: Noah Misch <noah@leadboat.com>
Date: Sun Nov 23 09:34:03 2014 -0500

    Detect PG_PRINTF_ATTRIBUTE automatically.

    This eliminates gobs of "unrecognized format function type" warnings under MinGW compilers predating GCC 4.4.
-
Committed by Marbin Tan
Ensure that we're triggering the `gpfaultinjector`. There are cases where, even though we have the `gpfaultinjector` set up, the transaction still does not block properly. By creating a database, we ensure that all segments get contacted, and FTS will detect the issue that we created with gpfaultinjector.
-
Committed by Xin Zhang
This will protect against regressions in walrep features on the master branch.
-
- Dec 14, 2017 (13 commits)
-
-
Committed by Tingfang Bao
This is to make gptransfer able to transfer only the schema of databases or tables, like "--schema-only -d foo" or "--schema-only -t bar.public.t1". It could actually do that before, but forgot to set the success flag.

Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Pengzhou Tang
When declaring a cursor, the QD will not block until the QEs have got a snapshot and set up an interconnect, so we cannot guarantee that the QD always sees that 'qe_got_snapshot_and_interconnect' was triggered. For the case itself, if the writer QE can get slower than the reader QE without the help of the fault injector, it's even better.
-
Committed by Peifeng Qiu
The pg_query function is the underlying workhorse for db.query in Python. For INSERT queries, it returns a string containing the number of rows successfully inserted. PQcmdTuples() parses a PGresult returned by PQexec; if it's an insert-count result, it returns a pointer to the count. However, this pointer points into the internal buffer of the PGresult, so it shouldn't be used after PQclear(), even though most of the time its content remains accessible and unchanged. PyString_FromString makes a copy of the string, so moving PQclear() to after PyString_FromString is safe. This fixes the problem of gpload sometimes getting an unprintable insert count.
-
Committed by Max Yang
Fix a possible memory leak.

Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Max Yang
Same as upstream: in upstream, internal_bpchar_pattern_compare compares its inputs ignoring trailing spaces, but GPDB just used a whole-string comparison. The bug didn't appear before the PG_MERGE_84 merge because GPDB used a TableScan when executing the following query; after PG_MERGE_84, an IndexScan is used, and internal_bpchar_pattern_compare is used for the index:

create table tbl(id int4, v char(10));
create index tbl_v_idx_bpchar on tbl using btree(v bpchar_pattern_ops);
insert into tbl values (1, 'abc');
explain select * from tbl where v = 'abc '::char(20);
select * from tbl where v = 'abc '::char(20);

Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Xiaoran Wang
The namespace parameter passed to the transformRelOptions routine has only two values: one is 'toast', the other is NULL. The 'toast' value is used to filter toast reloptions; the NULL value is used to get non-toast reloptions. The transformStorageEncodingClause routine only needs to get non-toast reloptions.

Author: Max Yang <myang@pivotal.io>
-
Committed by Xin Zhang
Before we shut down the mirror, we create a checkpoint and also run an empty transaction to ensure the restart point is created on the mirror. This will speed up mirror recovery because we don't have to replay all the xlog records from the beginning.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Taylor Vesely
In the previous commit c9e2693c, the control file was supposed to be updated on the mirror when a restart point is created. However, different from upstream, GPDB runs the mirror under DB_IN_STANDBY_MODE rather than upstream's DB_IN_ARCHIVE_RECOVERY mode. Hence, the control file was never updated when creating a restart point.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Xin Zhang
Also remove the redundant fsync=off from related pipeline files. It can be overridden with BLDWRAP_POSTGRES_CONF_ADDONS.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Shreedhar Hardikar
-
Committed by Lisa Owen
-
Committed by Ashwin Agrawal
This code is hidden under a GUC and never turned on, so there is no point keeping it. It was coded in the past due to some inconsistency issues which have not surfaced for a long time now. Plus, the PT needs to go away soon anyway.
-
- Dec 13, 2017 (13 commits)
-
-
Committed by Daniel Gustafsson
The comment added in 916f460f created a nested comment structure by accident, which triggered a warning in clang for -Wcomment. Reword the comment slightly to make the compiler happy.

planner.c:194:15: warning: '/*' within block comment [-Wcomment]
 * support pl/* statements (relevant when they are planned on the segments).
              ^
-
Committed by Chuck Litzell
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Zhenghua Lyu
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Shreedhar Hardikar
The default value of Gp_role is GP_ROLE_DISPATCH, which means auxiliary processes inherit this value. FileRep does the same, but also executes queries using SPI on the segment, which means Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master QD. So, bring back the check on GpIdentity.

Author: Asim R P <apraveen@pivotal.io>
Author: Shreedhar Hardikar <shardikar@pivotal.io>
-
When translating the query to DXL, Orca maps the attno to a column id in a hash map. The hashmap fails to insert values for a key that is already in the hashmap, and the Orca translator will fail an assertion when the key-value pair is not inserted successfully. When GPDB is built in non-debug mode, we don't see Orca falling back, and it produces the correct plan. So it is OK to remove the assertion and release the memory allocated for the key and value when the insertion fails.
-
Committed by Mel Kiyama
Missed in an earlier update.
-
Committed by Andreas Scherbaum
* Make SPI work with 64 bit counters
* Fix GET DIAGNOSTICS
* Remove the earlier introduced SPI_processed64 variable

This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191

https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.
-
Committed by Shreedhar Hardikar
The original name was deceptive because this check is also done for QE slices that run on the master. For example:

EXPLAIN SELECT * FROM func1_nosql_vol(5), foo;
                                         QUERY PLAN
--------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.30..1.37 rows=4 width=12)
   ->  Nested Loop  (cost=0.30..1.37 rows=2 width=12)
         ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=8)
         ->  Materialize  (cost=0.30..0.33 rows=1 width=4)
               ->  Broadcast Motion 1:3  (slice1)  (cost=0.00..0.30 rows=3 width=4)
                     ->  Function Scan on func1_nosql_vol  (cost=0.00..0.26 rows=1 width=4)
 Settings:  optimizer=off
 Optimizer status: legacy query optimizer
(8 rows)

Note that in the plan, the function func1_nosql_vol() will be executed on a master slice with Gp_role as GP_ROLE_EXECUTE. Also, update output files.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Shreedhar Hardikar
We don't want to use the optimizer for planning queries in SQL, pl/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which will access distributed tables. We can't run such queries from the segment slices anyway because they require dispatching a query within another query, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support pl/* statements (relevant when they are planned on the segments). For these reasons, restrict the use of ORCA to the master QD processes only.

Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault injector tests into the newly added gporca_faults.sql so that the rest can run in a parallel group.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Sambitesh Dash
Prior to the postgres window merge, `aggdistinct` in `Aggref` was a bool and passed as-is to GPORCA. Now, it is a list of `SortGroupClause` objects for `Aggref`. This commit translates `aggdistinct` into a bool to pass to ORCA, and while translating from DXL to PLNSTMT, a list of `SortGroupClause` is created from the corresponding args in `Aggref`.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Heikki Linnakangas
It was missing ALTER TYPE RENAME TO, which we got as part of the 8.4 merge earlier. In passing, remove some GPDB-added paragraphs from the SGML docs. The SGML docs are currently not used for anything other than creating the \h help text, so the extra explanations there do nothing but create merge conflicts.
-