- July 19, 2017 (1 commit)
-
-
By Omer Arap
If GPDB and ORCA are built with debugging enabled, an assert in ORCA checks that a singleton bucket has both bounds closed: `GPOS_ASSERT_IMP(FSingleton(), fLowerClosed && fUpperClosed);`. The histogram stored in pg_statistic may contain repeated boundary values such as `10, 20, 20, 30, 40`, which used to produce buckets of the form `[0,10), [10,20), [20,20), [20,30), [30,40]`. This caused the assert to fail, since `[20,20)` is a singleton bucket whose upper bound is open. With this fix, the generated buckets look like `[0,10], [10,20), [20,20], (20,30), [30,40]`. Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
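The singleton-bucket rule above can be sketched as follows. This is a minimal illustration of the idea only, not ORCA's actual implementation, and the helper name is made up:

```python
# Sketch: when two consecutive histogram boundary values are equal, emit a
# singleton bucket closed on both sides, and open the lower bound of the
# following bucket so the buckets stay disjoint.
def make_buckets(bounds):
    buckets = []            # entries: (lower_closed, lo, hi, upper_closed)
    lower_closed = True
    for lo, hi in zip(bounds, bounds[1:]):
        if lo == hi:
            buckets.append((True, lo, hi, True))   # singleton: both bounds closed
            lower_closed = False                   # next bucket must exclude lo
        else:
            upper_closed = hi == bounds[-1]        # close the final bucket
            buckets.append((lower_closed, lo, hi, upper_closed))
            lower_closed = True
    return buckets

print(make_buckets([0, 10, 20, 20, 30, 40]))
```

This only captures the singleton rule from the commit message; the open/closed choices of the remaining buckets depend on details of the real code not shown here.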
-
- July 18, 2017 (7 commits)
-
-
By Michael Roth
* Initial commit
* Updated instructions: added a note about building missing packages
-
By Andreas Scherbaum
* Update VACUUM documentation
* VACUUM does not remove rows, it only marks the space for reuse
* No daily VACUUM is required; the need depends on the frequency of changes
* VACUUM FULL does not (yet) recreate the table, it rewrites it
-
By Ming LI
If two external tables refer to the same PIPE file directly via the gpfdist or file protocol, concurrent reads result in wrongly formatted data, or hang in the case of gpfdist. Now, before reading the pipe, the pipe file is first locked with flock (Windows is not yet supported); other requests from GPDB report an error. Signed-off-by: Ming LI <mli@apache.org> Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
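The locking behavior described above can be illustrated with a small sketch (assumed behavior, not gpfdist's actual code): an exclusive, non-blocking flock lets the first reader proceed, while a concurrent reader fails immediately instead of reading corrupt data or hanging.

```python
# Sketch: serialize access to a shared pipe file with flock(LOCK_EX | LOCK_NB).
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.pipe")
open(path, "w").close()  # stand-in for the named pipe

fd1 = os.open(path, os.O_RDONLY)
fcntl.flock(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # first reader takes the lock

fd2 = os.open(path, os.O_RDONLY)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked_twice = True
except BlockingIOError:
    # a second concurrent reader is rejected instead of racing on the pipe
    locked_twice = False

print(locked_twice)  # False
```

flock locks belong to the open file description, so even within one process the second descriptor conflicts with the first, which is what makes this usable as a concurrency guard.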
-
By Abhijit Subramanya
Exclude WAL sender process backends (those whose application name is `walreceiver`) from being counted as leftover backends. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
By Xin Zhang
Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
By Abhijit Subramanya
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
By Abhijit Subramanya
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
- July 16, 2017 (1 commit)
-
-
By Roman Shaposhnik
-
- July 15, 2017 (1 commit)
-
-
By Heikki Linnakangas
* Remove PartOidExpr, it's not used in GPDB. The target lists of the DML nodes that ORCA generates include a column for the target partition OID, which can then be referenced by PartOidExprs. ORCA uses these to allow sorting tuples by partition before inserting them into the underlying table. That feature is used by HAWQ, where grouping tuples that go to the same output partition is cheaper. Since commit adfad608, which removed the gp_parquet_insert_sort GUC, we no longer do that in GPDB. GPDB can hold multiple result relations open at the same time, so there is no performance benefit to grouping the tuples first (or at least not enough benefit to counterbalance the cost of a sort). So remove the now-unused support for PartOidExpr in the executor.
* Bump ORCA version to 2.37. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
* Removed acceptedLeaf. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- July 14, 2017 (5 commits)
-
-
By Andreas Scherbaum
-
By Alexandra Wang
Signed-off-by: John Gaskin <johntgaskin@gmail.com> Signed-off-by: Todd Sedano <tsedano@pivotal.io>
-
By Jimmy Yih
When a standby is shut down and restarted, WAL recovery starts from the last restartpoint. If we replay an AO write record that is followed by a drop record, the WAL replay of the AO write record will find that the segment file does not exist. To fix this, we piggyback on the heap solution of tracking invalid pages in the invalid_page_tab hash table. The hash table's key struct uses a block number, which, for AO's sake, we pretend is the segment file number for AO/AOCO tables. This solution will be revisited, possibly to create a separate hash table for AO/AOCO tables with a proper key struct. Big thanks to Heikki for pointing out the issue.
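The invalid-page tracking idea above can be modeled with a toy sketch (hypothetical names and shapes, not the real xlog code): replay of a write to a missing file records the page as invalid instead of failing, a later replayed drop forgets those entries, and recovery only errors out if entries survive to the end of replay.

```python
# Toy model of invalid_page_tab keyed by (relation, block number), where for
# AO/AOCO tables the "block number" slot actually carries the segment file
# number, as described in the commit message.
invalid_page_tab = {}

def replay_write(rel, blkno_or_segno, file_exists):
    # during recovery a write to a missing file is remembered, not fatal
    if not file_exists:
        invalid_page_tab[(rel, blkno_or_segno)] = True

def replay_drop(rel):
    # a later drop record explains the missing file: forget its entries
    for key in [k for k in invalid_page_tab if k[0] == rel]:
        del invalid_page_tab[key]

replay_write("ao_rel", 3, file_exists=False)  # segfile 3 missing: remembered
replay_drop("ao_rel")                         # the following drop resolves it
print(invalid_page_tab)                       # {} -> recovery finishes cleanly
```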
-
By Ashwin Agrawal
We generate AO XLOG records when --enable-segwalrep is configured. We should now replay those records on the mirror or during recovery. The replay is only performed in standby mode, since promotion will not execute until there are no more XLOG records to read from the WAL stream.
-
By Heikki Linnakangas
As reported by @flochman. See GitHub issue #2739.
-
- July 13, 2017 (12 commits)
-
-
By Heikki Linnakangas
Seems like a good thing to test. To avoid having to maintain separate ORCA and non-ORCA expected outputs, change the ORCA error message to match the one you get without ORCA.
-
By Daniel Gustafsson
This removes code that is either unreachable, due to prior identical tests that break out of the codepath, or dead due to always being true. Asserting that an unsigned integer is >= 0 will always be true, so it's pointless. Per "logically dead code" gripes by Coverity.
-
By Jimmy Yih
When running `gpsegwalrep.py start`, it would intermittently deadlock on the subprocess.check_output call. Apparently, concurrent subprocess.check_output calls can deadlock depending on which shell commands are run and how fast they execute. For now, fix the issue by only calling subprocess.check_output under a thread lock. Someone can revisit this later, although it is assumed a proper tool will be created in the near future.
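The workaround described above amounts to serializing subprocess calls behind a single lock. A minimal sketch (the names are assumptions, not gpsegwalrep.py's actual structure):

```python
# Sketch: only one thread at a time may be inside subprocess.check_output,
# so concurrent callers cannot trigger the intermittent deadlock.
import subprocess
import threading

_subprocess_lock = threading.Lock()

def checked_output(cmd):
    with _subprocess_lock:              # serialize all check_output calls
        return subprocess.check_output(cmd)

# concurrent callers now queue up on the lock instead of racing
threads = [
    threading.Thread(target=checked_output, args=(["echo", "ok"],))
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```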
-
By Abhijit Subramanya
If we try to inject certain faults when the system is initialized with filerep disabled, we get the following error:

```
gpfaultinjector error: Injection Failed: Failure: could not insert fault injection, segment not in primary or mirror role
Failure: could not insert fault injection, segment not in primary or mirror role
```

This patch removes the role check for non-filerep faults so that they don't fail on a cluster initialized without filerep.
-
By Asim R P
The filerep resync logic that fetches changed blocks from the changetracking (CT) log has changed: the LSN is no longer used to filter out blocks from the CT log. If a relation's changed blocks exceed the threshold number of blocks that can be fetched at a time, the last fetched block number is remembered and used to form the subsequent batch.
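The new batching scheme can be modeled with a toy sketch (assumed and simplified): each batch resumes from the last fetched block number rather than filtering by LSN, so no changed block of a relation can be skipped.

```python
# Sketch: fetch a relation's changed blocks from the CT log in fixed-size
# batches, resuming after the last fetched block number.
def fetch_batches(changed_blocks, batch_size):
    """changed_blocks: sorted block numbers of one relation in the CT log."""
    batches = []
    last_fetched = -1
    while True:
        batch = [b for b in changed_blocks if b > last_fetched][:batch_size]
        if not batch:
            break
        batches.append(batch)
        last_fetched = batch[-1]   # remember where to resume; no LSN involved
    return batches

print(fetch_batches([1, 2, 5, 8, 9, 13], 3))  # -> [[1, 2, 5], [8, 9, 13]]
```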
-
By Asim R P
Filerep resync works by obtaining the blocks changed since a mirror went down from the changetracking (CT) log. The changed blocks are obtained in fixed-size batches, with blocks of the same relation ordered by block number. The bug occurs when a higher-numbered block of a relation is changed such that it has a lower LSN than the lower-numbered blocks, and that higher-numbered block is not included in the first batch of changed blocks for the relation. Such blocks miss being resynchronized to the mirror due to an incorrect filter based on the previously obtained changed blocks' LSN. The mirror is then eventually declared in sync with the primary even though some changed blocks exist only on the primary. This data loss manifests only when the mirror takes over as primary, upon rebalance or the primary going down.
-
By Asim R P
The GUC gp_changetracking_max_rows replaces a compile-time constant. The resync worker obtains at most gp_changetracking_max_rows changed blocks from the changetracking log at a time. Controlling this with a GUC makes it possible to exercise bugs in the resync logic around this area.
-
By mkiyama
-
By mkiyama
-
By mkiyama
-
By mkiyama
-
By mkiyama
- July 12, 2017 (3 commits)
-
-
By Adam Lee
-
By Adam Lee
0.9.8 is EOL; the 1.0+ versions have many security and performance improvements.
-
By Jesse Zhang
`enable-cassert` is your friend, yo
-
- July 11, 2017 (10 commits)
-
-
By Heikki Linnakangas
If you have a query like "SELECT COUNT(col1) FROM wide_table", where the table has dozens of columns, the overhead in aocs_getnext() just to figure out which columns need to be fetched becomes noticeable. Optimize it.
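The optimization idea can be sketched abstractly (hypothetical names, not the aocs_getnext() code): compute the list of needed columns once per scan, instead of re-testing a per-column projection flag for every tuple.

```python
# Sketch: hoist the "which columns do we need?" work out of the per-tuple path.
def begin_scan(proj):
    # proj: per-column booleans saying whether the column must be fetched;
    # resolve them once, when the scan starts
    return [i for i, needed in enumerate(proj) if needed]

def getnext(row, needed_cols):
    # the per-tuple work now only touches the projected columns
    return [row[i] for i in needed_cols]

needed = begin_scan([False, True, False, True])
print(getnext(["a", "b", "c", "d"], needed))  # -> ['b', 'd']
```

For a wide table scanned by a query like `SELECT COUNT(col1) FROM wide_table`, this turns dozens of per-tuple checks into a short precomputed list.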
-
By Heikki Linnakangas
There was a mixture of spaces and tabs being used for indentation in aocsam.c, and I finally got fed up with that while doing other changes in that file. I ran pgindent, and did a bunch of manual fixups of the formatting. All the changes in this commit are purely cosmetic. I did the same for appendonlyam.c, although I'm not changing it at the moment, to keep aocsam.c and appendonlyam.c in sync.
-
By Heikki Linnakangas
In aocsam.c, there's a block of code that does:

```
if (...)
{
    AOTupleIdInit_rowNum(...);
}
else
{
    AOTupleIdInit_rowNum(...);
}
```

While hacking, I removed the seemingly unnecessary braces, turning that into just:

```
if (...)
    AOTupleIdInit_rowNum(...);
else
    AOTupleIdInit_rowNum(...);
```

But then I got a compiler error about 'else' without 'if'. I was baffled for a moment, until I looked at the definition of AOTupleIdInit_rowNum: the way it includes curly braces makes it not work in an if-else construct like the above. These macros also have double-evaluation hazards. To make this more robust, turn the macros into static inline functions. Inline functions generally behave more sanely and are more readable than macros.
-
By Heikki Linnakangas
This does mean that we don't free the array quite as quickly as we used to, but it's a drop in the sea. The array is very small, there are much bigger data structures involved in every AOCS scan that are not freed as quickly, and it's freed at the end of the query in any case.
-
By Heikki Linnakangas
Commit fa6c2d43 added two functions, but forgot to add prototypes for them.
-
By Adam Lee
This is important for debugging customers' issues. (The log level still matters.)
-
By Ming LI
1. Log the raw string if it can't be decoded as unicode.
2. If a similar exception occurs in log(), continue processing the remaining log with a warning.
3. If any other exception occurs in CatThread, log the thread exit without blocking the worker process, and report the warning "gpfdist log halt because Log Thread got an exception:".
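Point 1 above can be sketched as a decode-with-fallback helper (an assumption for illustration, not gpfdist's actual logger):

```python
# Sketch: fall back to the raw byte representation when a log line is not
# valid unicode, instead of dropping the line or crashing the log thread.
def format_log_line(raw: bytes) -> str:
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # keep the undecodable bytes visible in the log
        return repr(raw)

print(format_log_line(b"hello"))          # -> hello
print(format_log_line(b"\xff\xfe bad"))   # -> b'\xff\xfe bad'
```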
-
By Marbin Tan
Create a more extensive workload for the SQL to make it last longer. The previous SQL completed too quickly, so by the time the actual pid was read, the pid no longer existed, causing the result to be 0.
-
By Venkatesh Raghavan
-
Oops we broke the tests sorry :( This reverts commit 97db5bdd.
-