- 04 October 2019, 3 commits
-
-
Committed by Lisa Owen
* docs - move gpmapreduce yaml info to own utility page; misc topic edits
* relocate topic and graphic, add shortdesc, fix linking
-
Committed by Bhuvnesh Chaudhary
Earlier, COPY <Catalogtable> FROM <file> was allowed irrespective of the value of allow_system_table_mods. This commit restricts such statements to when allow_system_table_mods is set to ON. Co-Authored-By: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Lisa Owen
-
- 03 October 2019, 1 commit
-
-
Committed by Shreedhar Hardikar
This reverts commit 1db9b27a, which was breaking a memory accounting test.
-
- 02 October 2019, 6 commits
-
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Chris Hajas
We introduce a new type of memory pool and memory pool manager: CMemoryPoolPalloc and CMemoryPoolPallocManager.

The motivation for this PR is to improve memory allocation/deallocation performance when using GPDB allocators. Additionally, we would like to use the GPDB memory allocators by default (change the default for optimizer_use_gpdb_allocators to on) to prevent ORCA from crashing when we run out of memory (OOM). However, with the current way of doing things, doing so would add around 10% performance overhead to ORCA.

CMemoryPoolPallocManager overrides the default CMemoryPoolManager in ORCA, and creates a CMemoryPoolPalloc memory pool instead of a CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc and pfree instead of gp_malloc and gp_free, and we don't do any memory accounting.

So where does the performance improvement come from? Previously, we would (essentially) pass gp_malloc and gp_free to an underlying allocation structure (which has been removed on the ORCA side). However, we would add additional headers and overhead to maintain a list of all of these allocations. When tearing down the memory pool, we would iterate through the list of allocations and explicitly free each one. So we would end up paying overhead on the ORCA side AND the GPDB side, and the overhead on both sides was quite expensive. To compare against the previous implementation, see the Allocate and Teardown functions in CMemoryPoolTracker.

With this PR, we improve optimization time by ~15% on average, and by up to 30-40% on some memory-intensive queries.

This PR does remove memory accounting in ORCA; it was only enabled when the optimizer_use_gpdb_allocators GUC was set. By setting `optimizer_use_gpdb_allocators`, we still capture the memory used when optimizing a query in ORCA, without the overhead of the memory accounting framework.

Additionally, add a top-level ORCA context where new contexts are created. The OptimizerMemoryContext is initialized in InitPostgres(). For each memory pool in ORCA, a new memory context is created in OptimizerMemoryContext.

Bumps ORCA version to 3.74.0. Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io> Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Adam Berlin
- as an experiment. We want to see if it becomes less flaky. (cherry picked from commit b99920ff)
-
- 01 October 2019, 7 commits
-
-
Committed by Georgios Kokolatos
Otherwise it is necessary to always pass the flag set to 'no' for the package builds. This commit completes <3827c3a6> regarding tab completion. Reported-by: Bradford D. Boyle <bboyle@pivotal.io> Reviewed-by: Bradford D. Boyle <bboyle@pivotal.io> Reviewed-by: Xin Zhang <xzhang@pivotal.io> (cherry picked from commit 15f80832)
-
Committed by Bradford D. Boyle
When building gpdb with a non-empty DESTDIR, the build would fail because certain Makefiles did not correctly account for it. Additionally, when we make a symlink for `gpcheckcat` we *should not* include DESTDIR in the target. A common usage for DESTDIR is to allow moving the build artifacts after the build is completed. If the target includes DESTDIR, then the link could point to a non-existent path. Authored-by: Bradford D. Boyle <bboyle@pivotal.io> (cherry picked from commit 7de35b7e)
-
Committed by Mel Kiyama
* docs - add GUC optimizer_enable_dml
* docs - review comment updates. Add guc to table.
* docs - add optimizer_enable_dml GUC to ditamap
-
Committed by Mel Kiyama
* docs - promote pxf protocol over s3 protocol
--Reordered listing of pxf and s3.
--Added link to pxf protocol from s3 protocol
--Updated some pxf/s3 information in g-external-tables.xml: pxf uses CREATE EXTENSION, s3 uses CREATE PROTOCOL
* docs - updated ditamap to put PXF first under "Working with External Data"
-
Committed by Chris Hajas
Corresponding ORCA commits:
* Refactor: Simplify property derivation in CExpression
* Support on-demand property derivation in CDrvdPropRelational
* Rename DrvdPropArray to DrvdProp and DrvdProp2dArray to DrvdPropArray
* Bump ORCA version to 3.73.0
Authored-by: Chris Hajas <chajas@pivotal.io> (cherry picked from commit 043365cc)
-
Committed by Mel Kiyama
* docs - Add note for gpfdist/gpload wrong line number in error messages. This will be backported to 6X_STABLE, 5X_STABLE, and 4.3.x
* docs - fixed typo
-
Committed by Mel Kiyama
-
- 28 September 2019, 2 commits
-
-
Committed by Adam Berlin
- otherwise the cluster remains running if a test fails, which is problematic for the next test run.
-
Committed by Adam Berlin
-
- 27 September 2019, 15 commits
-
-
Committed by Ashwin Agrawal
The gp_tablespace_with_faults test writes a no-op record and waits for the mirror to replay it before deleting the tablespace directories. This step sometimes fails in CI and causes flaky behavior. This is due to existing code behavior in the startup and walreceiver processes: if the primary writes a big xlog record (one spanning multiple pages), flushes only a partial xlog record due to XLogBackgroundFlush(), and then restarts before committing the transaction, the mirror receives only the partial record and waits to get the complete record. Meanwhile, after recovery, the no-op record gets written in place of that big record, and the startup process on the mirror continues to wait to receive xlog beyond the previously received point before proceeding. Hence, as a temporary workaround until the actual code problem is resolved, and to avoid failures for this test, switch xlog before emitting the no-op xlog record so that the no-op record lands at a far distance from the previously emitted xlog record. (cherry picked from commit efd76c4c)
-
Committed by Adam Berlin
-
Committed by Adam Berlin
- introduces test helpers to remove knowledge from the suite file.
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
If the number of rows is always passed around together with the actual rows, there's a missing abstraction.
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
This eliminates the warning "incompatible pointer types" as in: ``` greenplum_five_to_greenplum_six_upgrade_test.c:137:23: warning: incompatible pointer types passing 'User *(*)[10]' to parameter of type 'User **' (aka 'struct UserData **') [-Wincompatible-pointer-types] initialize_user_rows(&rows, size); ^~~~~ ```
-
Committed by Jesse Zhang
By declaring them static ``` greenplum_five_to_greenplum_six_upgrade_test.c:35:1: warning: no previous prototype for function 'connectToFive' [-Wmissing-prototypes] connectToFive() ^ ```
-
Committed by Jesse Zhang
* Re-ordering Makefile.global and Makefile.mock brings in the much-needed global CFLAGS that turn some warnings into errors
* Add missing headers to fix "missing prototype" errors
* While we're at it, properly use <> for system / standard headers
-
Committed by Jesse Zhang
-
- 26 September 2019, 6 commits
-
-
Committed by Georgios Kokolatos
The cause of the PANIC was an incorrectly populated list containing the namespace information for the affected relation. A GrantStmt contains the necessary objects in a list named objects. This gets populated initially during parsing (via the privilege_target rule) and processed during parse analysis, based on the target type and object type, into RangeVar nodes, FuncWithArgs nodes, or plain names. In Greenplum, the catalog information about partition hierarchies is not propagated to all segments. This information needs to be processed in the dispatcher and added back into the parsed statement for the segments to consume. In this commit, the partition hierarchy information is expanded only for the required target and object types. For those types, the parsed statement gets updated before dispatching regardless of whether it contains partitioned references. The privileges tests have been updated to also check for privileges on the segments. Problem identified and initial patch by Fenggang <ginobiliwang@gmail.com>, reviewed and refactored by me. (cherry picked from commit 7ba2af39)
-
Committed by Ashwin Agrawal
The current code for COPY FROM picks the COPY_DISPATCH mode for non-distributed/non-replicated tables as well. This causes a crash. It should be using COPY_DIRECT, the normal/direct mode to be used for such tables. The crash was exposed by the following SQL commands:

CREATE TABLE public.heap01 (a int, b int) distributed by (a);
INSERT INTO public.heap01 VALUES (generate_series(0,99), generate_series(0,98));
ANALYZE public.heap01;
COPY (select * from pg_statistic where starelid = 'public.heap01'::regclass) TO '/tmp/heap01.stat';
DELETE FROM pg_statistic where starelid = 'public.heap01'::regclass;
COPY pg_statistic from '/tmp/heap01.stat';

Important note: yes, it is known and strongly recommended not to touch `pg_statistic` or any other catalog table this way. But it's no good to panic either. After this change, the copy into `pg_statistic` is going to ERROR out "correctly" with `cannot accept a value of type anyarray` instead of crashing, as there just isn't any way at the SQL level to insert data into pg_statistic's anyarray columns. Refer: https://www.postgresql.org/message-id/12138.1277130186%40sss.pgh.pa.us (cherry picked from commit 6793882b)
-
Committed by David Yozie
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs - move install guide to gpdb repo
--move Install Guide source files back to gpdb repo.
--update config.yml and gpdb-landing-subnav.erb files for OSS doc builds.
--removed refs directory - unused utility reference pages.
--Also added more info to creating a gpadmin user.
These files have conditionalized text (pivotal and oss-only): ./supported-platforms.xml ./install_gpdb.xml ./apx_mgmt_utils.xml ./install_guide.ditamap ./preinstall_concepts.xml ./migrate.xml ./install_modules.xml ./prep_os.xml ./upgrading.xml
* docs - updated supported platforms with PXF information.
* docs - install guide review comment update -- renamed one file from supported-platforms.xml to platform-requirements.xml
* docs - reworded requirement/warning based on review comments.
-
Committed by StanleySung
* add AD steps in pxf krb doc
* From Lisa Owen
* distributing keytab using gpscp and gpssh
* Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
* Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
* misc formatting edits
* a few more formatting edits
-