- 14 December 2017, 11 commits
-
-
Committed by Peifeng Qiu
The pg_query function is the underlying workhorse for db.query in Python. For INSERT queries it returns a string containing the number of rows successfully inserted. PQcmdTuples() parses a PGresult returned by PQexec(); if it is an insert-count result, it returns a pointer to the count. However, that pointer points into the internal buffer of the PGresult, so it must not be used after PQclear(), even though most of the time its contents remain accessible and unchanged. PyString_FromString makes a copy of the string, so moving PQclear() to after PyString_FromString() is safe. This fixes the problem of gpload sometimes getting an unprintable insert count.
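A minimal C sketch of the corrected ordering, assuming a simplified wrapper (the function name and surrounding glue are illustrative, not the actual pg_query source):

    #include <libpq-fe.h>
    #include <Python.h>

    static PyObject *
    insert_count_to_pystring(PGconn *conn, const char *query)
    {
        PGresult *result = PQexec(conn, query);
        /* PQcmdTuples() returns a pointer into result's internal buffer */
        char *cmdtuples = PQcmdTuples(result);
        /* PyString_FromString() copies the string, so copy first... */
        PyObject *ret = PyString_FromString(cmdtuples);
        /* ...and only then free the PGresult the pointer referred to */
        PQclear(result);
        return ret;
    }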
-
Committed by Max Yang
Fix a possible memory leak. Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Max Yang
Make bpchar comparison behave the same as upstream. In upstream, internal_bpchar_pattern_compare compares inputs ignoring trailing spaces, but GPDB used a whole-string comparison. The bug did not surface before the PG_MERGE_84 merge because GPDB used a TableScan when executing the following query; after PG_MERGE_84 an IndexScan is used, and internal_bpchar_pattern_compare is used for the index:

    create table tbl(id int4, v char(10));
    create index tbl_v_idx_bpchar on tbl using btree(v bpchar_pattern_ops);
    insert into tbl values (1, 'abc');
    explain select * from tbl where v = 'abc '::char(20);
    select * from tbl where v = 'abc '::char(20);

Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Xiaoran Wang
The namespace parameter passed to the transformRelOptions routine takes only two values: 'toast', which filters the toast reloptions, and NULL, which returns the non-toast reloptions. The transformStorageEncodingClause routine only needs the non-toast reloptions. Author: Max Yang <myang@pivotal.io>
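A hedged sketch of the two call shapes; the argument list follows the upstream 8.4-era prototype, and the exact GPDB signature may differ:

    /* namespace = "toast": keep only the toast.<option> reloptions */
    toastOptions = transformRelOptions((Datum) 0, defList, "toast",
                                       validnsps, false, false);

    /* namespace = NULL: keep only the un-prefixed reloptions, which is
     * all transformStorageEncodingClause needs */
    heapOptions = transformRelOptions((Datum) 0, defList, NULL,
                                      validnsps, false, false);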
-
Committed by Xin Zhang
Before we shut down the mirror, we create a checkpoint and also run an empty transaction to ensure the restart point is created on the mirror. This speeds up mirror recovery because we don't have to replay all the xlog records from the beginning. Author: Xin Zhang <xzhang@pivotal.io> Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Taylor Vesely
In previous commit c9e2693c, the control file is updated on the mirror when a restart point is created. However, unlike upstream, GPDB runs the mirror under DB_IN_STANDBY_MODE rather than upstream's DB_IN_ARCHIVE_RECOVERY mode. Hence, the control file was never updated when creating a restart point. Author: Xin Zhang <xzhang@pivotal.io> Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Xin Zhang
Also remove the redundant fsync=off from related pipeline files. It can be overridden with BLDWRAP_POSTGRES_CONF_ADDONS. Author: Xin Zhang <xzhang@pivotal.io> Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Shreedhar Hardikar
-
Committed by Lisa Owen
-
Committed by Ashwin Agrawal
This code is hidden behind a GUC and never turned on, so there is no point keeping it. It was written in the past due to some inconsistency issues which have not surfaced for a long time now. Besides, PT needs to go away soon anyway.
-
- 13 December 2017, 16 commits
-
-
Committed by Daniel Gustafsson
The comment added in 916f460f accidentally created a nested comment structure, which triggered a -Wcomment warning in clang. Reword the comment slightly to make the compiler happy.

    planner.c:194:15: warning: '/*' within block comment [-Wcomment]
     * support pl/* statements (relevant when they are planned on the segments).
                  ^
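For context, a literal `/*` inside a block comment opens what clang flags as a nested comment. A hedged sketch of the reworded comment shape (the exact replacement wording is an assumption):

    /*
     * ... so we don't need to support PL/<lang> statements
     * (relevant when they are planned on the segments).
     */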
-
Committed by Chuck Litzell
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Ning Yu
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
Signed-off-by: Richard Guo <riguo@pivotal.io>
-
Committed by Zhenghua Lyu
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Shreedhar Hardikar
The default value of Gp_role is GP_ROLE_DISPATCH, which means auxiliary processes inherit this value. FileRep does the same, but also executes queries using SPI on the segment. Hence Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master QD, so bring back the check on GpIdentity. Author: Asim R P <apraveen@pivotal.io> Author: Shreedhar Hardikar <shardikar@pivotal.io>
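A hedged sketch of the restored check; GpIdentity.segindex and MASTER_CONTENT_ID are the usual GPDB identifiers, but the exact condition used by the commit is an assumption:

    /* Gp_role alone is inherited by auxiliary processes (and by FileRep
     * on the segments), so also confirm we really are on the master */
    if (Gp_role == GP_ROLE_DISPATCH &&
        GpIdentity.segindex == MASTER_CONTENT_ID)
    {
        /* this process is genuinely the master QD */
    }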
-
When translating the query to DXL, Orca maps the attno to a column id in a hash map. The hash map fails to insert a value for a key that is already present, and the Orca translator asserts when the key-value pair is not inserted successfully. When GPDB is built in non-debug mode, we don't see Orca falling back, and it produces the correct plan. So it is OK to remove the assertion and release the memory allocated for the key and value when the insertion fails.
-
Committed by Mel Kiyama
Missed in an earlier update.
-
Committed by Andreas Scherbaum
* Make SPI work with 64-bit counters
* Fix GET DIAGNOSTICS
* Remove the earlier-introduced SPI_processed64 variable

This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191

https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.
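A minimal sketch of what the 64-bit counters enable, assuming a server-side C function already inside an SPI_connect()/SPI_finish() pair (the table names are hypothetical):

    #include "executor/spi.h"

    if (SPI_execute("INSERT INTO big_tbl SELECT * FROM staging",
                    false, 0) != SPI_OK_INSERT)
        elog(ERROR, "SPI_execute failed");
    /* SPI_processed is now a uint64, so counts beyond 2^31 survive */
    elog(NOTICE, "inserted " UINT64_FORMAT " rows", SPI_processed);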
-
Committed by Shreedhar Hardikar
The original name was deceptive because this check is also done for QE slices that run on the master. For example:

    EXPLAIN SELECT * FROM func1_nosql_vol(5), foo;
                                          QUERY PLAN
    --------------------------------------------------------------------------------------------
     Gather Motion 3:1  (slice2; segments: 3)  (cost=0.30..1.37 rows=4 width=12)
       ->  Nested Loop  (cost=0.30..1.37 rows=2 width=12)
             ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=8)
             ->  Materialize  (cost=0.30..0.33 rows=1 width=4)
                   ->  Broadcast Motion 1:3  (slice1)  (cost=0.00..0.30 rows=3 width=4)
                         ->  Function Scan on func1_nosql_vol  (cost=0.00..0.26 rows=1 width=4)
     Settings:  optimizer=off
     Optimizer status: legacy query optimizer
    (8 rows)

Note that in this plan the function func1_nosql_vol() will be executed on a master slice with Gp_role set to GP_ROLE_EXECUTE. Also, update output files. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Shreedhar Hardikar
We don't want to use the optimizer for planning queries in SQL, PL/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which access distributed tables. We can't run such queries from segment slices anyway, because they would require dispatching a query within another query, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support pl/* statements (relevant when they are planned on the segments). For these reasons, restrict ORCA to the master QD processes only. Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault-injector tests into the newly added gporca_faults.sql so that the rest can run in a parallel group. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Sambitesh Dash
Prior to the postgres window merge, `aggdistinct` in `Aggref` was a bool and passed as-is to GPORCA. Now it is a list of `SortGroupClause` objects in `Aggref`. This commit translates `aggdistinct` into a bool to pass to ORCA, and when translating from DXL to PLNSTMT, a list of `SortGroupClause` objects is created from the corresponding args in `Aggref`. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
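A hedged sketch of the query-to-DXL direction; the field follows upstream's post-merge Aggref, but the actual translator code may differ:

    /* After the window merge, aggdistinct is a List of SortGroupClause
     * nodes; ORCA still models DISTINCT as a flag, so collapse it */
    bool aggIsDistinct = (aggref->aggdistinct != NIL);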
-
Committed by Heikki Linnakangas
It was missing ALTER TYPE RENAME TO, which we got as part of the 8.4 merge earlier. In passing, remove some GPDB-added paragraphs from the SGML docs. The SGML docs are currently not used for anything other than creating the \h help text, so the extra explanations there do nothing but create merge conflicts.
-
Committed by Bhuvnesh Chaudhary
Unit tests should be run after make install, as they require the timezone files under the `<gpdb>/share/postgresql/timezone` directory, which are put in place by `make install`. This reordering is required after commit `8ce6f34a`. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Lisa Owen
* docs - updates for gphdfs jar file changes
* updates include:
  - add a note that the default value gphd-1.1 is not supported
  - remove references to Pivotal and Greenplum HD
-
Committed by David Yozie
-
- 12 December 2017, 13 commits
-
-
Committed by Heikki Linnakangas
This was left out during the merge because, when we started merging this, we tracked the amount of memory used by the hash table differently. It was changed back to the way it is in upstream in commit f4d60673, before this code was merged in, but I forgot to restore these calls. Also re-enable increasing the number of batches in hash join when adding to the skew bucket.
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
These error codes were marked as deprecated in September 2007, but the code didn't get the memo. Extend the deprecation into the code and actually replace the usage. Ten years seems like long enough notice, so also remove the renames; the odds of anyone using these in code which compiles against a 6X tree should be low (and easily fixed).
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
It was added as a performance optimization back in 2007, but it's not doing much at the moment. Some quick testing on my laptop suggests that it's a wash at best, and makes the join slightly slower in some scenarios. This will reduce our diff vs upstream, reducing the chance of merge conflicts in the future.
-
-
Committed by Jimmy Yih
-
Committed by Jialun
Update the grep keywords to filter out unrelated programs.
-
Committed by Xiaoran Wang
Backport of the following upstream commit:

    commit 95c238d9
    Author: Andrew Dunstan <andrew@dunslane.net>
    Date:   Sat Mar 8 01:16:26 2008 +0000

        Improve efficiency of attribute scanning in CopyReadAttributesCSV.
        The loop is split into two parts, inside quotes, and outside quotes,
        saving some instructions in both parts.

Author: Max Yang <myang@pivotal.io>
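A simplified, self-contained illustration of the two-part loop (not the actual CopyReadAttributesCSV code): outside quotes the inner loop only tests for the delimiter and the quote character, and inside quotes only for the closing quote, so each iteration does fewer comparisons than one loop checking an in-quote flag every character.

    #include <stdio.h>

    /* Copy one CSV field from src into dst; quote doubling/escapes are
     * omitted for brevity.  Returns a pointer to the delimiter (or NUL)
     * that ended the field. */
    static const char *
    scan_csv_field(const char *src, char *dst, char delim, char quote)
    {
        const char *p = src;
        char c;

        for (;;)
        {
            /* Part 1: outside quotes */
            for (;;)
            {
                c = *p++;
                if (c == '\0' || c == delim)
                    goto done;
                if (c == quote)
                    break;          /* switch to the in-quote loop */
                *dst++ = c;
            }
            /* Part 2: inside quotes */
            for (;;)
            {
                c = *p++;
                if (c == '\0')
                    goto done;      /* unterminated quote: stop */
                if (c == quote)
                    break;          /* back to the out-of-quote loop */
                *dst++ = c;
            }
        }
    done:
        *dst = '\0';
        return p - 1;
    }

    int main(void)
    {
        char buf[64];
        scan_csv_field("\"a,b\",c", buf, ',', '"');
        printf("%s\n", buf);        /* prints: a,b */
        return 0;
    }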
-
Committed by Xin Zhang
There is a race condition in the postmaster where the first request to start XLOG streaming (RequestXLogStreaming()) is ignored because the WalReceiverPID has not yet been reset to zero by do_reaper(). In the worst case it takes 10 seconds to restart a failed WAL receiver. To work around the issue, we increase the timeout to 20 seconds to ensure the WAL receiver is properly restarted. Author: Xin Zhang <xzhang@pivotal.io> Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Ashwin Agrawal
Author: Xin Zhang <xzhang@pivotal.io> Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Xin Zhang
Based on this commit:

    commit 49a33602
    Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
    Date:   Wed Jul 12 15:30:52 2017 +0300

        Fix ordering of operations in SyncRepWakeQueue to avoid assertion failure.

We fix a similar race condition that occurred in the same function: flipping the order of the syncRepState update and the dequeue fixed the issue. We are not able to cherry-pick the original commit since it is in 9.5 and uses pg_read_barrier() and pg_write_barrier(). Discussion: https://www.postgresql.org/message-id/flat/CAEepm%3D3k%3DgTAn5%3DX_Qv%3DhWw9JnUxUMXCzBxTKPaHHXxKkF0%2Biw%40mail.gmail.com#CAEepm=3k=gTAn5=X_Qv=hWw9JnUxUMXCzBxTKPaHHXxKkF0+iw@mail.gmail.com Author: Xin Zhang <xzhang@pivotal.io> Author: Ashwin Agrawal <aagrawal@pivotal.io>
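A hedged sketch of the reordering, following the upstream commit's reasoning (the surrounding GPDB code may differ): the waiting backend must never observe SYNC_REP_WAIT_COMPLETE while it still appears to be linked in the queue, so dequeue first and set the state afterwards.

    /* Remove the waiting backend from the sync-rep queue first... */
    SHMQueueDelete(&(thisproc->syncRepLinks));
    /* ...then mark its wait complete; without pg_write_barrier()
     * (unavailable here), this ordering is what keeps the waiter from
     * seeing WAIT_COMPLETE while still linked in the queue */
    thisproc->syncRepState = SYNC_REP_WAIT_COMPLETE;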
-
Committed by Heikki Linnakangas
The check was introduced by the 8.4 merge, but I had disabled it because we were tripping it. I re-enabled it in commit 1df4698a, thinking that the other changes in that commit made it work again, but we're seeing pipeline failures on many filerep-related test suites because of this. So disable it again. We really shouldn't hit that sanity check, but the current plan is to revamp this code greatly before the next release, as we are about to start replacing the GPDB-specific file replication with upstream WAL replication. Chances are that it will start to just work once that work is done, so I'm not going to spend any more time investigating this right now. Analysis by Jacob Champion.
-