- 26 Apr 2017, 6 commits
-
-
Committed by Lisa Owen
[ci skip]
-
Committed by mkiyama
-
Committed by Lisa Owen
* add idle time to session_level_memory_consumption view
* add idle time to table intro
-
Committed by Lisa Owen
-
Committed by mkiyama
* GPDB DOCS - update pl/java topic - remove embedded Java runtime info [ci skip]
* update link. [ci skip]
-
Committed by Todd Sedano
- This also fixes a bug where we looked for the report file in the wrong directory.

Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 25 Apr 2017, 15 commits
-
-
Committed by Heikki Linnakangas
Commit fb93e7e7 removed this field, but commit 9a817d45 accidentally resurrected it. Remove it again.
-
Committed by Adam Lee
-
Committed by Omer Arap
-
Committed by Abhijit Subramanya
`build_exclude_list()` will return a statically allocated empty string if the number of exclude arguments is zero. We should check for this in the caller before freeing the pointer returned by it.
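The pattern described above can be sketched as follows. This is a minimal, hypothetical illustration (the function body and names are stand-ins, not GPDB's actual code): a builder that returns a static empty string for zero items, and a caller that must check before calling free().

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the pattern: with zero exclude arguments,
 * return a statically allocated empty string; otherwise return a
 * malloc'd, comma-joined list that the caller owns. */
static char *build_exclude_list_sketch(const char **args, int nargs)
{
    if (nargs == 0)
        return "";              /* static storage: must NOT be free()d */

    size_t len = 0;
    for (int i = 0; i < nargs; i++)
        len += strlen(args[i]) + 1;     /* each item plus ',' or NUL */

    char *list = malloc(len);
    list[0] = '\0';
    for (int i = 0; i < nargs; i++)
    {
        strcat(list, args[i]);
        if (i < nargs - 1)
            strcat(list, ",");
    }
    return list;
}

/* Caller-side fix: only free when something was actually allocated. */
static void use_exclude_list(const char **args, int nargs)
{
    char *list = build_exclude_list_sketch(args, nargs);

    printf("exclude: [%s]\n", list);
    if (nargs != 0)             /* the check the commit adds */
        free(list);
}
```

Calling free() on the static empty string would be undefined behavior, which is why the caller, not the builder, has to carry the check.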
-
Committed by Tushar Dadlani
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jimmy Yih
Remove the old defunct performance directory and replace it with a simple way to test AO/CO table loading. This will help measure performance changes when we start generating XLOGs for AO/CO tables and implement WAL replication between primaries and mirrors. The current performance implementation is a bit hacky, but it gets the job done and does not rely on TPC (which we unfortunately can't have in our source).

Current implementation:
1. Create the csv data file (specified by the NUM_COPIES Makefile variable)
2. Host the csv data file in an external table to load data into a base table faster (and to work around a known memory leak bug when doing many inserts in a LOOP)
3. Create the AO and AOCO tables and test loading times sequentially and concurrently
4. Parse the output into a csv to allow separate comparison analysis
-
Committed by Abhijit Subramanya
Coverity identified a number of places in which it couldn't prove that a string being copied into a fixed-size buffer would fit. We believe that most, perhaps all of these are in fact safe, or are copying data that is coming from a trusted source so that any overrun is not really a security issue. Nonetheless it seems prudent to forestall any risk by using strlcpy() and similar functions.

Fixes by Peter Eisentraut and Jozef Mlich based on Coverity reports.

In addition, fix a potential null-pointer-dereference crash in contrib/chkpass. The crypt(3) function is defined to return NULL on failure, but chkpass.c didn't check for that before using the result. The main practical case in which this could be an issue is if libc is configured to refuse to execute unapproved hashing algorithms (e.g., "FIPS mode"). This ideally should've been a separate commit, but since it touches code adjacent to one of the buffer overrun changes, I included it in this commit to avoid last-minute merge issues.

This issue was reported by Honza Horak.

Security: CVE-2014-0065 for buffer overruns, CVE-2014-0066 for crypt()

(cherry picked from commit e3208fec32586076cd1a24cae19ded158a60e15a)
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
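The strlcpy() idiom the commit adopts can be sketched as below. glibc historically lacks strlcpy (PostgreSQL carries its own port), so this shim is only illustrative; `strlcpy_sketch` and `copy_path` are hypothetical names, not the actual code the commit touches.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* BSD-style strlcpy semantics: copy at most size-1 bytes, always
 * NUL-terminate, and return strlen(src) so callers can detect
 * truncation by comparing the return value against size. */
static size_t strlcpy_sketch(char *dst, const char *src, size_t size)
{
    size_t srclen = strlen(src);

    if (size > 0)
    {
        size_t n = (srclen < size - 1) ? srclen : size - 1;

        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;              /* >= size means the copy was truncated */
}

/* The pattern being fixed: a bare strcpy() into a fixed-size buffer
 * can overrun; strlcpy() clamps the copy and reports truncation. */
static int copy_path(char *dst, size_t dstsize, const char *src)
{
    if (strlcpy_sketch(dst, src, dstsize) >= dstsize)
    {
        fprintf(stderr, "path truncated\n");
        return 0;
    }
    return 1;
}
```

The same defensive style applies to the chkpass fix: since crypt(3) may return NULL (e.g., in FIPS mode), the return value has to be checked before it is dereferenced.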
-
Committed by Abhijit Subramanya
CID 130131: Copy into fixed size buffer (STRING_OVERFLOW)
Coverity reported that we might overrun the buffer `current_path` while trying to copy the return value of `PQgetvalue()`, since we don't check its length. Fix this by using `strlcpy()` in order to avoid buffer overruns. This is also how it has been fixed upstream.

CID 157318: Resource leak (RESOURCE_LEAK)
`build_exclude_list()` returns a string which has been allocated using malloc, and since we never store the return value, the allocated memory can never be freed. Fix this by storing the result of the function in a separate variable and freeing it once it has been used.
-
Committed by Heikki Linnakangas
ORCA can do some optimizations - partition pruning at least - if it can "see" into the elements of an array in a ScalarArrayOpExpr. For example, if you have a qual like "column IN (1, 2, 3)", and the table is partitioned on column, it can eliminate partitions that don't hold those values. The IN-clause is converted into a ScalarArrayOpExpr, so that is really equivalent to "column = ANY <array>". However, ORCA doesn't know how to extract elements from an array-typed Const, so it can only do that if the array in the ScalarArrayOpExpr is an ArrayExpr. Normally, eval_const_expressions() simplifies any ArrayExpr into a Const if all the elements are constants, but we had disabled that when ORCA was used, to keep the ArrayExprs visible to it.

There are a couple of reasons why that was not a very good solution. First, while we refrain from converting an ArrayExpr to an array Const, it doesn't help if the argument was an array Const to begin with. The "x IN (1,2,3)" construct is converted to an ArrayExpr by the parser, but we would miss the opportunity if it's written as "x = ANY ('{1,2,3}'::int[])" instead. Secondly, by not simplifying the ArrayExpr, we miss the opportunity to simplify the expression further. For example, if you have a qual like "1 IN (1,2)", we can evaluate that completely at plan time to 'true', but we would not do that with ORCA because the ArrayExpr was not simplified.

To be able to also optimize those cases, and to slightly reduce our diff vs upstream in clauses.c, always simplify ArrayExprs to Consts, when possible. To compensate, so that ORCA still sees ArrayExprs rather than array Consts (in those cases where it matters), when a ScalarArrayOpExpr is handed over to ORCA, we check if the argument array is a Const, and convert it (back) to an ArrayExpr if it is.

Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Tom Meyer
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
-
Similar to the optimizer, in the planner with this change `BitmapIndexScanState` is responsible for freeing the bitmaps during `ExecEnd` or `ExecRescan`. The parent node doesn't need to worry about freeing the bitmaps.

This also fixes the following error for a planner plan (bitmapscan_ao.sql):
```
select count(1) from bmcrash b1, bmcrash b2
where b1.bitmap_col = b2.bitmap_col
   or b1.bitmap_col = '999' and b1.btree_col1 = 'abcdefg999';
ERROR:  non hash bitmap (nbtree.c:572) (seg1 slice2 127.0.0.1:25433 pid=83873)
```
-
`HashBitMap` is owned by `BitmapIndexScanState` (`node->bitmap`), and `BitmapIndexScanState` will free the `HashBitMap` when `ExecEnd` is called. So freeing it is not the responsibility of `HashStreamOpaque`, which merely holds a reference to it. Also, `HashStreamOpaque` should never try to access the `HashBitMap` while being freed itself: the `HashBitMap` might have been freed even before the `HashStreamOpaque` is.
-
This reverts commit b5d7b30f. `HashStreamOpaque` will never own the `HashBitmap`, hence there is no need for the `free` variable in `HashStreamOpaque` to denote whether it owns the `HashBitmap`, and we don't need the `tbm_create_stream_node_ref` API at all. Given that `HashStreamOpaque` does not own the `HashBitMap`, it should never try to access the `HashBitMap` while `HashStreamOpaque` is being freed: there is no guarantee that the `HashBitmap` is still valid at that time, since `ExecEndBitmapIndexScan` might already have freed the `HashBitmap` referenced by `HashStreamOpaque`.
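The single-owner rule these three commits enforce can be sketched as follows. All names here are hypothetical stand-ins for `BitmapIndexScanState` / `HashStreamOpaque` (a sketch of the ownership discipline, not GPDB's actual structs): the owner frees the bitmap at end-of-scan, and the borrower drops its reference without ever freeing or dereferencing it.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Bitmap { int nwords; } Bitmap;

typedef struct ScanState        /* owner, like BitmapIndexScanState */
{
    Bitmap *bitmap;
} ScanState;

typedef struct StreamNode       /* borrower, like HashStreamOpaque */
{
    Bitmap *bitmap_ref;         /* borrowed reference: never free()d here */
} StreamNode;

static void scan_end(ScanState *s)      /* like ExecEndBitmapIndexScan */
{
    free(s->bitmap);            /* only the owner frees the bitmap */
    s->bitmap = NULL;
}

static void stream_free(StreamNode *n)
{
    /* Do NOT free or dereference n->bitmap_ref: the owner may already
     * have freed it. Just drop the reference and free our own node. */
    n->bitmap_ref = NULL;
    free(n);
}
```

With this split, teardown order no longer matters: the borrower's cleanup stays safe even if the owner has already released the bitmap, which is exactly the crash scenario the revert eliminates.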
-
Committed by mkiyama
* GPDB DOCS - Fix issues found when validating files in gpdb-webhelp.ditamap.
* Fix for GUC gp_connections_per_thread: update the description for GPDB 5.
-
- 24 Apr 2017, 3 commits
-
-
Committed by Daniel Gustafsson
Avoid inclusion of PTHREAD_CFLAGS where it's not needed, and skip appending -pthread where it is needed, since PTHREAD_CFLAGS already contains -pthread.
-
Committed by Adam Lee
Have gpcloud generate the config file rather than provide one. Before this, s3conf was a base64'd string, neither intelligible nor easily updated.
-
Committed by Adam Lee
This will simplify the dependencies of running gpcloud regression tests.
-
- 23 4月, 2017 1 次提交
-
-
Committed by Jesse Zhang
This should make the sreh test pass in ORCA CI.
-
- 22 Apr 2017, 14 commits
-
-
Committed by Heikki Linnakangas
In assure_collocation_and_order, we check if the lower plan is already distributed on the partitioning key, and insert a Motion node if not. That check was broken.

When a query with multiple windows is planned, we create a chain of Group+Sort stages to compute each aggregate. To do that, we create a synthetic subquery for each stage, and each subquery has its own minimal range table and target list. The sort and distribution path keys should use expressions that refer to the original plan and PlannerInfo that we started with, but in assure_collocation_and_order(), we were incorrectly building path keys based on the target lists of the intermediate stages. Those are not comparable with path keys referring to the original target list, and as a result the check did not work as intended.

Fixes github issue #2236.
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Larry Hamel
-
Committed by Larry Hamel
-
Committed by Larry Hamel
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Heikki Linnakangas
This was disabled a long time ago, but even after digging into the git history, I could not figure out why. Nothing seems to break with it, and if there's a problem with handling composite types in ORCA, disabling this transformation would just be plastering over it anyway, because there are many other ways that you could end up with composite type constants in a query.
-
Committed by Jim Doty
Currently the Windows clients and loaders are not packaging correctly. This code will be used when the team responsible for the CLs is ready to fix the issues and integrate the changes into the full CI for gpdb.

Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Sharp
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jim Doty
- Update license paths to fix the client and loader build. The license header from gpAux/client/install/src/ was moved to the gpaddon repository, as it was introducing a non-commercial EULA into the open source repository. The makefile path for the clients and loaders build has been updated to fix the build.
- Make the host used for running WiX a variable, and set it to match the infrastructure. We set up a private hosted zone in the gpdb5-pipeline concourse network, and gave the wix-packaging vm the hostname `wix-packaging.gpdb5-pipeline-vpc.pivotal.io`.

Signed-off-by: Jingyi Mei <jmei@pivotal.io>
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Sharp
Fixes the Windows clients and loaders build, which uses the default system python. PYTHONHOME was being set to the empty string when BLD_ARCH was win32 (or another arch without an override for PYTHONHOME). Setting PYTHONHOME to the empty string causes python not to find its home, resulting in an error like "distutils module not found".

Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jim Doty
This commit restores the old packaging code. A separate commit will update the code so that it runs on the current infrastructure.

[#139296853](https://www.pivotaltracker.com/story/show/139296853)

Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Heikki Linnakangas
Upstream commit fc1adce4 (and follow-up commit 9e6dc137) tightened up pg_get_expr() so that you cannot pass an arbitrary string as the argument. It must come from one of the few system catalogs that actually store serialized expressions. GPDB has one more catalog table that stores expressions, pg_partition_rule, so list the appropriate pg_partition_rule columns as allowed exceptions, too.

Fixes github issue #2000, reported by @jonasbu11
-
- 21 Apr 2017, 1 commit
-
-
Committed by Ashwin Agrawal
-