- 01 October 2019, 4 commits
-
Committed by Mel Kiyama
* docs - promote pxf protocol over s3 protocol
--Reordered listing of pxf and s3.
--Added link to pxf protocol from s3 protocol.
--Updated some pxf/s3 information in g-external-tables.xml: pxf uses CREATE EXTENSION, s3 uses CREATE PROTOCOL.
* docs - updated ditamap to put PXF first under "Working with External Data"
-
Committed by Chris Hajas
Corresponding ORCA commits:
* Refactor: Simplify property derivation in CExpression
* Support on-demand property derivation in CDrvdPropRelational
* Rename DrvdPropArray to DrvdProp and DrvdProp2dArray to DrvdPropArray
* Bump ORCA version to 3.73.0

Authored-by: Chris Hajas <chajas@pivotal.io>
(cherry picked from commit 043365cc)
-
Committed by Mel Kiyama
* docs - Add note that gpfdist/gpload report the wrong line number in error messages. This will be backported to 6X_STABLE, 5X_STABLE, and 4.3.x.
* docs - fixed typo
-
Committed by Mel Kiyama
-
- 28 September 2019, 2 commits
-
Committed by Adam Berlin
- otherwise the cluster remains running if a test fails, which is problematic for the next test run.
-
Committed by Adam Berlin
-
- 27 September 2019, 15 commits
-
Committed by Ashwin Agrawal
The gp_tablespace_with_faults test writes a no-op record and waits for the mirror to replay it before deleting the tablespace directories. This step sometimes fails in CI and causes flaky behavior. This is due to existing code behavior in the startup and walreceiver processes. If the primary writes a big xlog record (one spanning multiple pages), flushes only part of it due to XLogBackgroundFlush(), but restarts before committing the transaction, the mirror receives only the partial record and waits for the complete one. Meanwhile, after recovery, the no-op record gets written in place of that big record, and the startup process on the mirror keeps waiting to receive xlog beyond the previously received point before proceeding. Hence, as a temporary workaround until the actual code problem is resolved, and to avoid failures for this test, switch the xlog before emitting the no-op xlog record, so that the no-op record lands far from the previously emitted xlog record.

(cherry picked from commit efd76c4c)
-
Committed by Adam Berlin
-
Committed by Adam Berlin
- introduces test helpers to remove knowledge from the suite file.
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
If the number of rows is always passed around together with the actual rows, there's a missing abstraction.
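A minimal C sketch of the abstraction being hinted at (names are hypothetical, not the ones the test suite uses): bundling the row array and its count into one struct keeps them from drifting apart across call sites.

```c
#include <stdio.h>

typedef struct User { int id; } User;

/* Hypothetical abstraction: the rows and their count travel together. */
typedef struct UserRows
{
	User **rows;
	int    count;
} UserRows;

static void print_rows(UserRows r)
{
	for (int i = 0; i < r.count; i++)
		printf("user %d\n", r.rows[i]->id);
}

int main(void)
{
	User a = {1}, b = {2};
	User *ptrs[] = {&a, &b};
	UserRows r = {ptrs, 2};

	/* One argument instead of a (rows, nrows) pair at every call site. */
	print_rows(r);
	return 0;
}
```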
-
Committed by Jesse Zhang
-
Committed by Jesse Zhang
This eliminates the warning "incompatible pointer types", as in:

```
greenplum_five_to_greenplum_six_upgrade_test.c:137:23: warning: incompatible pointer types passing 'User *(*)[10]' to parameter of type 'User **' (aka 'struct UserData **') [-Wincompatible-pointer-types]
        initialize_user_rows(&rows, size);
                             ^~~~~
```
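For context, a standalone reproduction of the mismatch (shaped after the warning above, so `User` and `initialize_user_rows` are taken from it while the bodies are hypothetical): `&rows` on an array of ten pointers has type `User *(*)[10]`, whereas the callee expects `User **`; passing the array itself lets it decay to the expected type.

```c
#include <stddef.h>

typedef struct UserData User;
struct UserData { int id; };

static void initialize_user_rows(User **rows, size_t size)
{
	for (size_t i = 0; i < size; i++)
		rows[i] = NULL;
}

int main(void)
{
	User  *rows[10];
	size_t size = 10;

	/* initialize_user_rows(&rows, size);   <- 'User *(*)[10]' vs 'User **' */
	initialize_user_rows(rows, size);       /* array decays to 'User **'    */
	return 0;
}
```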
-
Committed by Jesse Zhang
By declaring them static:

```
greenplum_five_to_greenplum_six_upgrade_test.c:35:1: warning: no previous prototype for function 'connectToFive' [-Wmissing-prototypes]
connectToFive()
^
```
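A minimal sketch of the fix (function body hypothetical, name taken from the warning): marking a file-local helper static tells the compiler no external prototype is expected, which silences -Wmissing-prototypes.

```c
#include <stdio.h>

/* Was:  void connectToFive() { ... }  -- no previous prototype.
 * static limits the symbol to this translation unit, which is exactly
 * what -Wmissing-prototypes wants for file-local helpers. */
static void
connectToFive(void)
{
	printf("connecting...\n");
}

int main(void)
{
	connectToFive();
	return 0;
}
```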
-
Committed by Jesse Zhang
* Re-ordering Makefile.global and Makefile.mock brings in the much-needed global CFLAGS that turn some warnings into errors
* Add missing headers to fix "missing prototype" errors
* While we're at it, properly use <> for system / standard headers (see the sketch below)
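On the last bullet, a small illustration of the convention (the quoted local header name is hypothetical):

```c
/* <> searches the system/standard include paths. */
#include <stdio.h>
#include <string.h>

/* "" searches the project's own directories first; a mock-test helper
 * would be written as  #include "upgrade_test_helpers.h"  (name
 * hypothetical). */

int main(void)
{
	puts("system headers resolved via <>");
	return 0;
}
```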
-
Committed by Jesse Zhang
-
- 26 September 2019, 6 commits
-
Committed by Georgios Kokolatos
The cause of the PANIC was an incorrectly populated list containing the namespace information for the affected relation.

A GrantStmt contains the necessary objects in a list named objects. This list gets initially populated during parsing (via the privilege_target rule) and is processed during parse analysis, based on the target type and object type, into RangeVar nodes, FuncWithArgs nodes, or plain names.

In Greenplum, the catalog information about partition hierarchies is not propagated to all segments. This information needs to be processed in the dispatcher and added back into the parsed statement for the segments to consume. In this commit, the partition hierarchy information is expanded only for the target and object type that require it. The parsed statement gets updated, unconditionally of partition references, before dispatching for the required types. The privileges tests have been updated to also check for privileges on the segments.

Problem identified and initial patch by Fenggang <ginobiliwang@gmail.com>, reviewed and refactored by me.

(cherry picked from commit 7ba2af39)
-
Committed by Ashwin Agrawal
The current code for COPY FROM picks COPY_DISPATCH mode even for non-distributed/non-replicated tables, which causes a crash. It should use COPY_DIRECT, the normal/direct mode for such tables. The crash was exposed by the following SQL commands:

```
CREATE TABLE public.heap01 (a int, b int) distributed by (a);
INSERT INTO public.heap01 VALUES (generate_series(0,99), generate_series(0,98));
ANALYZE public.heap01;
COPY (select * from pg_statistic where starelid = 'public.heap01'::regclass) TO '/tmp/heap01.stat';
DELETE FROM pg_statistic where starelid = 'public.heap01'::regclass;
COPY pg_statistic from '/tmp/heap01.stat';
```

Important note: yes, it is known and strongly recommended not to touch pg_statistic or any other catalog table this way. But it's no good to panic either. With this change, the COPY into pg_statistic ERRORs out "correctly" instead of crashing, with `cannot accept a value of type anyarray`, as there just isn't any way at the SQL level to insert data into pg_statistic's anyarray columns. Refer: https://www.postgresql.org/message-id/12138.1277130186%40sss.pgh.pa.us

(cherry picked from commit 6793882b)
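For illustration, a toy C sketch of the intended selection logic (type and function names here are hypothetical stand-ins, not GPDB's actual COPY internals): dispatch mode is only warranted when the table actually has a distribution policy.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for GPDB's COPY modes and table policy. */
typedef enum { COPY_DIRECT, COPY_DISPATCH } CopyMode;

typedef struct TablePolicy
{
	bool distributed;   /* has a distribution key */
	bool replicated;    /* replicated to all segments */
} TablePolicy;

/* Only distributed/replicated tables need dispatching to segments;
 * everything else (e.g. catalog tables) is written directly. */
static CopyMode choose_copy_mode(const TablePolicy *p)
{
	if (p != NULL && (p->distributed || p->replicated))
		return COPY_DISPATCH;
	return COPY_DIRECT;
}

int main(void)
{
	TablePolicy catalog = {false, false};
	printf("catalog table -> %s\n",
	       choose_copy_mode(&catalog) == COPY_DIRECT ? "COPY_DIRECT" : "COPY_DISPATCH");
	return 0;
}
```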
-
Committed by David Yozie
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs - move install guide to gpdb repo
--move Install Guide source files back to gpdb repo.
--update config.yml and gpdb-landing-subnav.erb files for OSS doc builds.
--removed refs directory - unused utility reference pages.
--Also added more info to creating a gpadmin user.
These files have conditionalized text (pivotal and oss-only):
./supported-platforms.xml
./install_gpdb.xml
./apx_mgmt_utils.xml
./install_guide.ditamap
./preinstall_concepts.xml
./migrate.xml
./install_modules.xml
./prep_os.xml
./upgrading.xml
* docs - updated supported platforms with PXF information.
* docs - install guide review comment update -- renamed one file from supported-platforms.xml to platform-requirements.xml
* docs - reworded requirement/warning based on review comments.
-
Committed by StanleySung
* add AD steps in pxf krb doc
* From Lisa Owen
* distributing keytab using gpscp and gpssh
* Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb
Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
* Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb
Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
* misc formatting edits
* a few more formatting edits
-
- 25 September 2019, 3 commits
-
Committed by Georgios Kokolatos
Tab completion does not work on CentOS 7 when installing from the rpms, as reported in #8575 [https://github.com/greenplum-db/gpdb/issues/8575]. The root cause is that the binary is linked against libedit (v 0x402), which provides the necessary `rl_line_buffer` but with different contents than readline: in libedit, the variable contains the last flushed entry in the history file, whereas in readline it contains the current line in the interactive terminal. It is not necessary to link against libedit in the build packages, so the flag is removed.

Reviewed-by: Bradford D. Boyle <bboyle@pivotal.io>
(cherry picked from commit 3827c3a6)
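To see why the contents of `rl_line_buffer` matter, a minimal GNU readline sketch (a toy completer, not psql's actual tab-completion code): the completion callback inspects `rl_line_buffer`, which only behaves as intended when it holds the line currently being edited.

```c
/* Build with: cc tabdemo.c -lreadline */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <readline/readline.h>

static const char *keywords[] = {"SELECT", "SET", "SHOW", NULL};

/* Hand back matching keywords one at a time. */
static char *keyword_generator(const char *text, int state)
{
	static int    idx;
	static size_t len;

	if (state == 0) { idx = 0; len = strlen(text); }
	while (keywords[idx] != NULL)
	{
		const char *kw = keywords[idx++];
		if (strncasecmp(kw, text, len) == 0)
			return strdup(kw);
	}
	return NULL;
}

static char **complete(const char *text, int start, int end)
{
	(void) start; (void) end;

	/* With readline, rl_line_buffer is the line currently being edited,
	 * so context checks like this work. With libedit it held the last
	 * flushed history entry instead, breaking completion. */
	if (strchr(rl_line_buffer, ';') == NULL)
		return rl_completion_matches(text, keyword_generator);
	return NULL;
}

int main(void)
{
	rl_attempted_completion_function = complete;
	char *line = readline("demo> ");
	if (line != NULL)
	{
		printf("you typed: %s\n", line);
		free(line);
	}
	return 0;
}
```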
-
Committed by Lena Hunter
* edits to AGGREGATE variables
* removing unnecessary words
* removed STATEFUNC from example
-
Committed by Adam Berlin
This issue was causing the build pipeline to go red. Reverting for now. This reverts commit 2154dfae.
-
- 24 September 2019, 10 commits
-
Committed by Adam Berlin
-
Committed by Fenggang
It has been discovered in GPDB v6 and above that a 'GRANT ALL ON ALL TABLES IN SCHEMA XXX TO YYY;' statement leads to a PANIC. From the resulting coredumps, now-obsolete code in the QD that tried to encode the objects in a partition reference into RangeVars was identified as the culprit. The list that the resulting vars were anchored to was expecting and treating only StrVars. The original code was added on the premise that catalog information was not available on segments; it also tried to optimize caching, yet the code was never fully written. Instead, the offending block is removed, which solves the issue and allows for greater alignment with upstream.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
(cherry picked from commit ba6148c6)
-
Committed by Ning Yu
In commit "Check parallel plans correctly" we introduced a regression which was only triggered by the out-tree diskquota tests, now we add a simplified version of the tests to ICW to prevent future regressions. The issue was fixed by "Replace planIsParallel by checking Plan->dispatch flag".
-
Committed by Heikki Linnakangas
Commit 7d74aa55 introduced a new function, planIsParallel(), to check whether the main plan tree needs the interconnect by checking whether it contains any Motion nodes. However, we already determine that in cdbparallelize(), by setting the Plan->dispatch flag; we were just not checking it when deciding whether the interconnect needs to be set up. Let's just check the 'dispatch' flag, like we did earlier in the function, instead of introducing another way of determining whether dispatching is needed. I'm about to get rid of the Plan->nMotionNodes field soon, which is why I don't want any new code to rely on it.

(cherry picked from commit c1851b62)
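A toy C sketch of the shape of this change (the structs are hypothetical stand-ins, not the real planner definitions): the interconnect decision reads the dispatch flag that cdbparallelize() already computed, instead of re-deriving it from Motion-node counts.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the planner structures involved. */
typedef enum { DISPATCH_NONE, DISPATCH_PARALLEL } DispatchMethod;

typedef struct Plan
{
	DispatchMethod dispatch;     /* set by cdbparallelize() */
	int            nMotionNodes; /* field slated for removal */
} Plan;

/* After: rely on the flag cdbparallelize() already computed ... */
static bool needs_interconnect(const Plan *plan)
{
	return plan->dispatch == DISPATCH_PARALLEL;
}

/* ... instead of re-deriving the answer from nMotionNodes, which is
 * about to go away. */
static bool needs_interconnect_old(const Plan *plan)
{
	return plan->nMotionNodes > 0;
}

int main(void)
{
	Plan p = {DISPATCH_PARALLEL, 2};
	printf("new: %d, old: %d\n", needs_interconnect(&p), needs_interconnect_old(&p));
	return 0;
}
```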
-
Committed by Paul Guo
* Ship modified python module subprocess32 again

subprocess32 is preferred over subprocess according to the Python documentation. In addition, we long ago modified its code to use vfork() instead of fork() (see the sketch below) to avoid "Cannot allocate memory" errors (a false alarm - memory is actually sufficient) in GPDB production environments, which usually run with memory overcommit disabled. We compiled and shipped it as well, but later it was compiled yet not shipped due to a makefile change (maybe a regression). Let's ship it again.

* Replace subprocess with our own subprocess32 in python code.

Cherry-picked 9c4a885b and da724e8d and a8090c13 and 4354f28c
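For background on the vfork() point, a minimal standalone C example (not the subprocess32 patch itself): vfork() suspends the parent and lends the child its address space until exec, so no page-table copy is needed - the copy that can fail with "Cannot allocate memory" under fork() when overcommit is disabled.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* vfork() does not copy the parent's address space; the parent is
	 * suspended until the child calls _exit() or an exec function. */
	pid_t pid = vfork();

	if (pid == 0)
	{
		/* Child: only exec or _exit are safe after vfork(). */
		execlp("echo", "echo", "hello from child", (char *) NULL);
		_exit(127);   /* exec failed */
	}
	else if (pid > 0)
	{
		int status;
		waitpid(pid, &status, 0);
		printf("child exited with %d\n", WEXITSTATUS(status));
	}
	else
	{
		perror("vfork");
		return EXIT_FAILURE;
	}
	return 0;
}
```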
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Tingfang Bao
In order to better maintain the gpdb build process, gp-releng re-organized the build artifacts storage. Only the artifacts path changed; the content is still the same as before.

Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Jimmy Yih
When the gp_use_legacy_hashops GUC was set, CTAS would not assign the legacy hash operator class to the new table. This is because CTAS goes through a different code path and uses the first operator class of the SELECT's result when no distribution key is provided.

Backported from GPDB master 9040f296. There was one conflict: the cdbhash_int4_ops operator class oid differs between 6X and master (10196 in master vs. 10166 in 6X).
-