- 27 April 2017, 5 commits
-
-
Committed by David Yozie
* moving linux kerberos client instructions to dedicated topic, and placing it parallel with existing windows topic
* removing map and dita files for connectivity package, which is no longer provided
* removing more references to connectivity packages; conditionalizing postgres odbc/jdbc vs datadirect drivers for oss/pivotal audiences
* adding .ditaval for handling pivotal conditions; [ci skip]
* relocating pivotal ditaval
-
Committed by Jamie McAtamney
Commit ca028ee8 reformatted queries to capitalize keywords, accidentally capitalizing a "%s" into a "%S". This commit fixes that.
-
This test validates that bitmap scans work correctly even with complex bitmap AND/OR logic on the inner side of a nested loop.
-
Committed by Heikki Linnakangas
These are the changes required to accommodate the ORCA changes from PR https://github.com/greenplum-db/gporca/pull/159. Bumps the ORCA version to v2.29.0.

Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Christopher Hajas
Commit ca028ee8 reformatted the queries to capitalize keywords. Remove the quoting of the cast to fix the syntax error.
-
- 26 April 2017, 20 commits
-
-
Committed by Shreedhar Hardikar
-
Committed by C.J. Jameson
- Before this change, if you were iterating in a container and wanted to run the compilation again, you had to remove gpaddon, because its presence caused the `cp` to fail. Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Melanie Plageman
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Melanie Plageman
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Melanie Plageman
Modify previous behave tests to address test flakiness:
- Use gpstop instead of pkill
- Reduce min_query_time to 0

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Omer Arap
Bump ORCA version to 2.28.0. Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by C.J. Jameson
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Karen Huddleston
- This was missed in commit aaa88b21
-
Committed by Shreedhar Hardikar
When creating the HHashTable, instead of using the available memory as the sole basis for determining the number of buckets, it now computes nbuckets as a function of the estimated groups/entries given by the planner. To prevent performance degradation when the statistics are off, the hash table expands by doubling the number of buckets and rehashing all the entries until it is out of memory. If more space is needed, the HHashTable spills to disk as before, but it can now accurately allocate buckets when the spill files are reloaded, based on the number of entries spilled.

This commit also makes other minor fixes:
- Change the calcHashAggTableSizes() signature to make it reusable
- Keep track of in-memory entries in the hash table
- Add tests for when it overflows multiple times
- Estimate the overhead per entry in the hash table more accurately
- Refactor statistics collection for EXPLAIN ANALYZE
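A minimal, illustrative sketch of the doubling-and-rehashing idea described above. The type and function names here (entry, HashAggTable, expand_hash_table) are stand-ins for illustration and are not the actual GPDB hash-agg structures:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Stand-in types for illustration; the real GPDB hash-agg structures differ. */
typedef struct entry
{
    unsigned long hash;          /* cached hash of the grouping key */
    struct entry *next;          /* bucket chain */
} entry;

typedef struct HashAggTable
{
    entry **buckets;
    size_t  nbuckets;            /* always a power of two */
    size_t  nentries;            /* in-memory entries currently held */
    size_t  mem_used;            /* bytes currently in use */
    size_t  mem_limit;           /* operator memory budget */
} HashAggTable;

/*
 * Double the bucket array and rehash every in-memory entry.  Returns false
 * when the larger bucket array would not fit in the memory budget, in which
 * case the caller falls back to spilling, as before.
 */
static bool
expand_hash_table(HashAggTable *ht)
{
    size_t  new_nbuckets = ht->nbuckets * 2;
    size_t  growth = (new_nbuckets - ht->nbuckets) * sizeof(entry *);
    entry **new_buckets;
    size_t  i;

    if (ht->mem_used + growth > ht->mem_limit)
        return false;

    new_buckets = calloc(new_nbuckets, sizeof(entry *));
    if (new_buckets == NULL)
        return false;

    for (i = 0; i < ht->nbuckets; i++)
    {
        entry *e = ht->buckets[i];

        while (e != NULL)
        {
            entry *next = e->next;
            size_t b = e->hash & (new_nbuckets - 1);   /* power-of-two mask */

            e->next = new_buckets[b];
            new_buckets[b] = e;
            e = next;
        }
    }

    free(ht->buckets);
    ht->buckets = new_buckets;
    ht->nbuckets = new_nbuckets;
    ht->mem_used += growth;
    return true;
}
```

The point of starting from the planner's group estimate and only then doubling is that a well-estimated table never pays the rehashing cost, while a badly estimated one degrades gracefully instead of chaining long bucket lists.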
-
Committed by David Yozie
[ci skip]
-
Committed by Lisa Owen
[ci skip]
-
Committed by mkiyama
-
Committed by Lisa Owen
* add idle time to session_level_memory_consumption view
* add idle time to table intro
-
Committed by Lisa Owen
-
Committed by mkiyama
* GPDB DOCS - update pl/java topic - remove embedded Java runtime info [ci skip]
* update link. [ci skip]
-
Committed by Todd Sedano
- This also fixes a bug where we looked for the report file in the wrong directory. Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 25 April 2017, 15 commits
-
-
Committed by Heikki Linnakangas
Commit fb93e7e7 removed this field, but commit 9a817d45 accidentally resurrected it. Remove it again.
-
Committed by Adam Lee
-
Committed by Omer Arap
-
Committed by Abhijit Subramanya
`build_exclude_list()` will return a statically allocated empty string if the number of exclude arguments is zero. We should check for this in the caller before freeing the pointer returned by it.
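A hedged sketch of the caller-side pattern this describes; the `build_exclude_list()` signature and the surrounding code here are assumptions for illustration, not copied from the source:

```c
#include <stdlib.h>

/* Assumed signature: returns a malloc'd string, or a statically allocated
 * empty string ("") when num_exclude is zero. */
extern char *build_exclude_list(const char **exclude, int num_exclude);

static void
use_exclude_list(const char **exclude, int num_exclude)
{
    char *exclusion_list = build_exclude_list(exclude, num_exclude);

    /* ... pass exclusion_list to the backup command ... */

    /* Only free when the function actually malloc'd the result; with zero
     * exclude arguments it hands back static storage that must not be freed. */
    if (num_exclude > 0)
        free(exclusion_list);
}
```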
-
Committed by Tushar Dadlani
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jimmy Yih
Remove the old, defunct performance directory and replace it with a simple way to test AO/CO table loading. This will help measure performance changes when we start generating XLOGs for AO/CO tables and implement WAL replication between primaries and mirrors. The current performance implementation is a bit hacky, but it gets the job done and does not rely on TPC (which we can't have in our source, unfortunately).

Current implementation:
1. Create a csv data file (specified by the NUM_COPIES Makefile variable)
2. Host the csv data file in an external table to load data into a base table faster (and to work around a known memory leak bug when doing many inserts in a LOOP)
3. Create the AO and AOCO tables and start testing loading times sequentially and concurrently
4. Parse the output into a csv to allow separate comparison analysis
-
Committed by Abhijit Subramanya
Coverity identified a number of places in which it couldn't prove that a string being copied into a fixed-size buffer would fit. We believe that most, perhaps all of these are in fact safe, or are copying data that is coming from a trusted source so that any overrun is not really a security issue. Nonetheless it seems prudent to forestall any risk by using strlcpy() and similar functions. Fixes by Peter Eisentraut and Jozef Mlich based on Coverity reports.

In addition, fix a potential null-pointer-dereference crash in contrib/chkpass. The crypt(3) function is defined to return NULL on failure, but chkpass.c didn't check for that before using the result. The main practical case in which this could be an issue is if libc is configured to refuse to execute unapproved hashing algorithms (e.g., "FIPS mode"). This ideally should've been a separate commit, but since it touches code adjacent to one of the buffer overrun changes, I included it in this commit to avoid last-minute merge issues. This issue was reported by Honza Horak.

Security: CVE-2014-0065 for buffer overruns, CVE-2014-0066 for crypt()

(cherry picked from commit e3208fec32586076cd1a24cae19ded158a60e15a)
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
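The crypt(3) part of the fix is easy to picture in isolation. Below is a minimal sketch of the NULL check, not the literal contrib/chkpass code; the function name and buffer handling are assumptions (link with -lcrypt; the crypt() declaration lives in <unistd.h> or <crypt.h> depending on the platform):

```c
#define _XOPEN_SOURCE           /* for crypt() on many libcs */
#include <stdio.h>
#include <unistd.h>

/* Hash a password, reporting failure instead of dereferencing NULL. */
static int
hash_password(const char *password, const char *salt,
              char *out, size_t outlen)
{
    char *crypted = crypt(password, salt);

    /* crypt() is defined to return NULL on failure, e.g. when libc refuses
     * an unapproved hashing algorithm ("FIPS mode"); check before use. */
    if (crypted == NULL)
    {
        fprintf(stderr, "crypt() failed\n");
        return -1;
    }

    snprintf(out, outlen, "%s", crypted);   /* bounded copy into caller's buffer */
    return 0;
}
```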
-
Committed by Abhijit Subramanya
CID 130131: Copy into fixed size buffer (STRING_OVERFLOW)
Coverity reported that we might overrun the buffer `current_path` while trying to copy the return value of `PQgetvalue()`, since we don't check its length. Fix this by using `strlcpy()` in order to avoid buffer overruns. This is also how it has been fixed upstream.

CID 157318: Resource leak (RESOURCE_LEAK)
`build_exclude_list()` returns a string which has been allocated with malloc, and since we never store the return value, the allocated memory can never be freed. Fix this by storing the result of the function in a separate variable and then freeing it once it has been used.
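For the first CID, the shape of the fix looks roughly like the following hedged sketch. The query, buffer size, and helper name are assumptions rather than the actual code; `strlcpy()` comes from src/port in the PostgreSQL tree (or the BSD libc), and all the libpq calls are standard:

```c
#include <string.h>
#include <libpq-fe.h>

#define MAXPGPATH 1024

/* Copy a server-reported path into a fixed-size buffer without overrunning it. */
static void
read_data_directory(PGconn *conn, char current_path[MAXPGPATH])
{
    PGresult *res = PQexec(conn,
                           "SELECT setting FROM pg_settings "
                           "WHERE name = 'data_directory'");

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
    {
        /* strlcpy() truncates at MAXPGPATH - 1 instead of writing past the
         * end of current_path, which is what Coverity flagged for strcpy(). */
        strlcpy(current_path, PQgetvalue(res, 0, 0), MAXPGPATH);
    }

    PQclear(res);
}
```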
-
Committed by Heikki Linnakangas
ORCA can do some optimizations - partition pruning at least - if it can "see" into the elements of an array in a ScalarArrayOpExpr. For example, if you have a qual like "column IN (1, 2, 3)", and the table is partitioned on column, it can eliminate partitions that don't hold those values. The IN-clause is converted into a ScalarArrayOpExpr, so that is really equivalent to "column = ANY <array>".

However, ORCA doesn't know how to extract elements from an array-typed Const, so it can only do that if the array in the ScalarArrayOpExpr is an ArrayExpr. Normally, eval_const_expressions() simplifies any ArrayExprs into Consts, if all the elements are constants, but we had disabled that when ORCA was used, to keep the ArrayExprs visible to it.

There are a couple of reasons why that was not a very good solution. First, while we refrain from converting an ArrayExpr to an array Const, it doesn't help if the argument was an array Const to begin with. The "x IN (1,2,3)" construct is converted to an ArrayExpr by the parser, but we would miss the opportunity if it's written as "x = ANY ('{1,2,3}'::int[])" instead. Secondly, by not simplifying the ArrayExpr, we miss the opportunity to simplify the expression further. For example, if you have a qual like "1 IN (1,2)", we can evaluate that completely at plan time to 'true', but we would not do that with ORCA because the ArrayExpr was not simplified.

To be able to also optimize those cases, and to slightly reduce our diff vs upstream in clauses.c, always simplify ArrayExprs to Consts, when possible. To compensate, so that ORCA still sees ArrayExprs rather than array Consts (in those cases where it matters), when a ScalarArrayOpExpr is handed over to ORCA, we check if the argument array is a Const, and convert it (back) to an ArrayExpr if it is.

Signed-off-by: Jemish Patel <jpatel@pivotal.io>
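A hedged sketch of the "convert the Const back to an ArrayExpr" step, written against recent PostgreSQL-style node APIs; the function name is made up for illustration, and GPDB 5's internal signatures differ in detail (notably around collations):

```c
#include "postgres.h"
#include "nodes/makefuncs.h"
#include "nodes/pg_list.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

/* Expand an array-typed Const into an ArrayExpr with one Const per element,
 * so the elements are individually visible to the optimizer. */
static ArrayExpr *
const_array_to_arrayexpr(Const *arrayconst)
{
    ArrayType  *arr = DatumGetArrayTypeP(arrayconst->constvalue);
    Oid         elemtype = ARR_ELEMTYPE(arr);
    int16       elmlen;
    bool        elmbyval;
    char        elmalign;
    Datum      *elems;
    bool       *nulls;
    int         nelems;
    List       *elements = NIL;
    ArrayExpr  *aexpr;
    int         i;

    get_typlenbyvalalign(elemtype, &elmlen, &elmbyval, &elmalign);
    deconstruct_array(arr, elemtype, elmlen, elmbyval, elmalign,
                      &elems, &nulls, &nelems);

    /* Wrap every element in its own Const node. */
    for (i = 0; i < nelems; i++)
        elements = lappend(elements,
                           makeConst(elemtype, -1, arrayconst->constcollid,
                                     elmlen, elems[i], nulls[i], elmbyval));

    aexpr = makeNode(ArrayExpr);
    aexpr->array_typeid = arrayconst->consttype;
    aexpr->array_collid = arrayconst->constcollid;
    aexpr->element_typeid = elemtype;
    aexpr->elements = elements;
    aexpr->multidims = false;
    aexpr->location = arrayconst->location;

    return aexpr;
}
```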
-
Committed by Tom Meyer
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
-
Similar to the optimizer, with this change `BitmapIndexScanState` in the planner is responsible for freeing the bitmaps during `ExecEnd` or `ExecRescan`. The parent node doesn't need to worry about freeing the bitmaps. This also fixes the following error for the planner plan (bitmapscan_ao.sql).
```
select count(1) from bmcrash b1, bmcrash b2
where b1.bitmap_col = b2.bitmap_col
   or b1.bitmap_col = '999' and b1.btree_col1 = 'abcdefg999';

ERROR:  non hash bitmap (nbtree.c:572)  (seg1 slice2 127.0.0.1:25433 pid=83873)
```
-
`HashBitMap` is owned by `BitmapIndexScanState` (`node->bitmap`). `BitmapIndexScanState` will free the `HashBitMap` when `ExecEnd` is called, so it is not the responsibility of `HashStreamOpaque`, which merely holds a reference to it, to free it. Moreover, `HashStreamOpaque` should never access the `HashBitMap` while being freed: the `HashBitMap` might have been freed even before the `HashStreamOpaque` is freed.
-
This reverts commit b5d7b30f. `HashStreamOpaque` will never own the `HashBitmap`, so there is no need for the `free` variable in `HashStreamOpaque` to denote whether it owns the `HashBitmap`, and we don't need the `tbm_create_stream_node_ref` API at all. Given that `HashStreamOpaque` does not own the `HashBitMap`, it should never try to access the `HashBitMap` while `HashStreamOpaque` is being freed; there is no guarantee that the `HashBitmap` is still valid at that time, since `ExecEndBitmapIndexScan` might already have freed the `HashBitmap` that is referenced in `HashStreamOpaque`.
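To make the ownership rule in these three commits concrete, here is a self-contained, illustrative-only sketch; the struct layouts and function names are stand-ins, not the actual GPDB tidbitmap/nodeBitmapIndexscan definitions:

```c
#include <stdlib.h>

typedef struct HashBitmap { int placeholder; } HashBitmap;   /* stand-in */

typedef struct BitmapIndexScanState
{
    HashBitmap *bitmap;      /* owning pointer */
} BitmapIndexScanState;

typedef struct HashStreamOpaque
{
    HashBitmap *bitmap;      /* borrowed reference only; never freed here */
} HashStreamOpaque;

/* The owner releases the bitmap at ExecEnd/ExecRescan time. */
static void
exec_end_bitmap_index_scan(BitmapIndexScanState *node)
{
    free(node->bitmap);
    node->bitmap = NULL;
}

/* The stream wrapper frees only itself: it must not free, or even touch,
 * the bitmap it points at, because the owner may already have released it. */
static void
free_hash_stream(HashStreamOpaque *op)
{
    free(op);
}
```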
-
Committed by mkiyama
* GPDB DOCS - Fix issues found when validating files in gpdb-webhelp.ditamap.
* fix for GUC gp_connections_per_thread - update description for GPDB 5.
-