1. 01 October 2019 (4 commits)
  2. 28 September 2019 (2 commits)
  3. 27 September 2019 (15 commits)
  4. 26 September 2019 (6 commits)
    • G
      Fix GRANT/REVOKE ALL statement PANIC when the schema contains partitioned relations · 510a4f01
      Committed by Georgios Kokolatos
      The cause of the PANIC was an incorrectly populated list containing the
      namespace information for the affected relation. A GrantStmt contains the
      necessary objects in a list named objects. This gets initially populated during
      parsing (via the privilege_target rule) and processed during parse analysis based
      on the target type and object type to RangeVar nodes, FuncWithArgs nodes or
      plain names.
      
      In Greenplum, the catalog information about the partition hierarchies is not
      propagated to all segments. This information needs to be processed in the
      dispatcher and added back into the parsed statement for the segments to
      consume.
      
      In this commit, the partition hierarchy information is expanded only for
      the required target and object types. For those types, the parsed
      statement is unconditionally updated with the partition references before
      dispatching.
      
      The privilege tests have been updated to also check privileges on the
      segments.
      
      Problem identified and initial patch by Fenggang <ginobiliwang@gmail.com>,
      reviewed and refactored by me.
      
      (cherry picked from commit 7ba2af39)
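      A minimal reproduction of the scenario this fixes might look like the
      following (a sketch with hypothetical schema, table, and role names;
      it needs a Greenplum cluster, and the exact partition syntax is the
      classic GPDB form):

      ```sql
      -- Hypothetical repro: a schema containing a partitioned relation.
      CREATE SCHEMA grant_test;
      CREATE ROLE some_role;
      CREATE TABLE grant_test.sales (id int, yr int)
          DISTRIBUTED BY (id)
          PARTITION BY RANGE (yr) (START (2018) END (2020) EVERY (1));

      -- Before the fix, schema-wide statements like these could PANIC
      -- because the partition hierarchy was not expanded correctly in the
      -- statement dispatched to the segments.
      GRANT ALL ON ALL TABLES IN SCHEMA grant_test TO some_role;
      REVOKE ALL ON ALL TABLES IN SCHEMA grant_test FROM some_role;
      ```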
    • A
      Fix crash in COPY FROM for non-distributed/non-replicated table · 3c5f4c15
      Committed by Ashwin Agrawal
      The current COPY FROM code picks COPY_DISPATCH mode for
      non-distributed/non-replicated tables as well, which causes a crash. It
      should instead use COPY_DIRECT, the normal/direct mode for such tables.
      
      The crash was exposed by the following SQL commands:
      
          CREATE TABLE public.heap01 (a int, b int) distributed by (a);
          INSERT INTO public.heap01 VALUES (generate_series(0,99), generate_series(0,98));
          ANALYZE public.heap01;
      
          COPY (select * from pg_statistic where starelid = 'public.heap01'::regclass) TO '/tmp/heap01.stat';
          DELETE FROM pg_statistic where starelid = 'public.heap01'::regclass;
          COPY pg_statistic from '/tmp/heap01.stat';
      
      Important note: yes, it is known and strongly recommended not to touch
      `pg_statistic` or any other catalog table this way, but it is no good to
      panic either. After this change, the COPY into `pg_statistic` ERRORs out
      "correctly" with `cannot accept a value of type anyarray` instead of
      crashing, as there just isn't any way at the SQL level to insert data
      into pg_statistic's anyarray columns. Refer:
      https://www.postgresql.org/message-id/12138.1277130186%40sss.pgh.pa.us
      
      (cherry picked from commit 6793882b)
    • D
      Docs: fix condition in pivotal .ditaval · 92ac46fc
      Committed by David Yozie
    • M
      docs - move install guide to gpdb repo (#8666) · 3373508e
      Committed by Mel Kiyama
      * docs - move install guide to gpdb repo
      
      --move Install Guide source files back to gpdb repo.
      --update config.yml and gpdb-landing-subnav.erb files for OSS doc builds.
      --removed refs directory - unused utility reference pages.
      --Also added more info about creating a gpadmin user.
      
      These files have conditionalized text (pivotal and oss-only).
      
      ./supported-platforms.xml
      ./install_gpdb.xml
      ./apx_mgmt_utils.xml
      ./install_guide.ditamap
      ./preinstall_concepts.xml
      ./migrate.xml
      ./install_modules.xml
      ./prep_os.xml
      ./upgrading.xml
      
      * docs - updated supported platforms with PXF information.
      
      * docs - install guide review comment update
      
      -- renamed one file from supported-platforms.xml to platform-requirements.xml
      
      * docs - reworded requirement/warning based on review comments.
    • S
      docs: add active directory kerberos steps for pxf (#7055) · 693b28e1
      Committed by StanleySung
      * add ad steps in pxf krb doc
      
      * From Lisa Owen
      
      * distributing keytab using gpscp and gpssh
      
      * Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb
      Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
      
      * Update gpdb-doc/markdown/pxf/pxf_kerbhdfs.html.md.erb
      Co-Authored-By: Alexander Denissov <denalex@users.noreply.github.com>
      
      * misc formatting edits
      
      * a few more formatting edits
  5. 25 September 2019 (3 commits)
  6. 24 September 2019 (10 commits)
    • A
      Set up PR pipeline for pg upgrade 6X_STABLE · 346af859
      Committed by Adam Berlin
    • F
      Fix issue for "grant all on all tables in schema xxx to yyy;" · 2154dfae
      Committed by Fenggang
      It has been discovered that in GPDB 6 and above, a 'GRANT ALL ON ALL
      TABLES IN SCHEMA XXX TO YYY;' statement leads to a PANIC.
      
      From the resulting coredumps, now-obsolete code in the QD that tried to
      encode the objects of a partition reference into RangeVars was identified
      as the culprit. The list that the resulting vars were anchored to was
      expecting and handling only StrVars. The original code was added on the
      premise that catalog information was not available on the segments. It
      also tried to optimize caching, yet that code was never fully written.
      
      Instead, the offending block is removed, which solves the issue and
      allows closer alignment with upstream.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      (cherry picked from commit ba6148c6)
    • N
      Add a regression test for parallel plan dispatching · c4618266
      Committed by Ning Yu
      In commit "Check parallel plans correctly" we introduced a regression
      that was only triggered by the out-of-tree diskquota tests; now we add a
      simplified version of those tests to ICW to prevent future regressions.
      
      The issue was fixed by "Replace planIsParallel by checking
      Plan->dispatch flag".
    • H
      Replace planIsParallel by checking Plan->dispatch flag. · d750cac7
      Committed by Heikki Linnakangas
      Commit 7d74aa55 introduced a new function, planIsParallel() to check
      whether the main plan tree needs the interconnect, by checking whether
      it contains any Motion nodes. However, we already determine that, in
      cdbparallelize(), by setting the Plan->dispatch flag. We were just not
      checking it when deciding whether the interconnect needs to be set up.
      Let's just check the 'dispatch' flag, like we did earlier in the
      function, instead of introducing another way of determining whether
      dispatching is needed.
      
      I'm about to get rid of the Plan->nMotionNodes field soon, which is why
      I don't want any new code to rely on it.
      
      (cherry picked from commit c1851b62)
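      The shape of the change can be sketched with a self-contained mock (the
      struct, enum, and function names below approximate what the commit
      message describes; they are not the real GPDB planner headers):

      ```c
      #include <stdbool.h>

      /* Minimal stand-ins for the planner structures named in the commit
       * message; the real definitions live in GPDB's plannodes.h. */
      typedef enum DispatchMethod
      {
          DISPATCH_SEQUENTIAL,    /* run locally, no interconnect needed */
          DISPATCH_PARALLEL       /* dispatched to segments */
      } DispatchMethod;

      typedef struct Plan
      {
          DispatchMethod dispatch;     /* set once by cdbparallelize() */
          int            nMotionNodes; /* legacy field, slated for removal */
      } Plan;

      /* Before the fix: re-derive the answer by counting Motion nodes. */
      static bool planIsParallel(const Plan *plan)
      {
          return plan->nMotionNodes > 0;
      }

      /* After the fix: trust the flag cdbparallelize() already computed. */
      static bool needsInterconnect(const Plan *plan)
      {
          return plan->dispatch == DISPATCH_PARALLEL;
      }
      ```

      Both predicates agree today; the point of the change is to have a single
      source of truth once nMotionNodes is gone.
      
      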
    • P
      Ship subprocess32 and replace subprocess with it in python code (#8658) · 7e44dbf1
      Committed by Paul Guo
      * Ship modified python module subprocess32 again
      
      subprocess32 is preferred over subprocess according to the Python
      documentation. In addition, we long ago modified the code to use vfork()
      instead of fork() to avoid spurious "Cannot allocate memory" errors
      (a false alarm: memory is actually sufficient) in GPDB production
      environments, which usually run with memory overcommit disabled. We used
      to compile and ship it, but after a makefile change (maybe a regression)
      it was still compiled yet no longer shipped. Let's ship it again.
      
      * Replace subprocess with our own subprocess32 in python code.
      
      Cherry-picked 9c4a885b and
                    da724e8d and
                    a8090c13 and
                    4354f28c
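      The drop-in replacement pattern can be sketched as follows (a hedged
      sketch: the guarded-import fallback is an illustration of how callers
      can stay portable, not code taken from this commit):

      ```python
      import sys

      # Prefer the bundled subprocess32 backport when available; the tree's
      # modified build uses vfork() to dodge spurious "Cannot allocate
      # memory" failures under disabled overcommit. Fall back to the stdlib
      # subprocess otherwise -- subprocess32 keeps the same API, so callers
      # are unchanged.
      try:
          import subprocess32 as subprocess
      except ImportError:
          import subprocess

      def run_python(snippet):
          """Run a small Python snippet in a child process, return stdout."""
          out = subprocess.check_output([sys.executable, "-c", snippet])
          return out.decode().strip()
      ```

      With this pattern, `import subprocess32 as subprocess` is the only line
      that changes when the backport is shipped.
      
      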
    • T
      Update the madlib build artifacts path · bb7c4383
      Committed by Tingfang Bao
      Authored-by: Tingfang Bao <bbao@pivotal.io>
    • T
      15dbb685
    • T
      Update the gpdb internal build artifacts path (#8678) · 1b816058
      Committed by Tingfang Bao
      To better maintain the gpdb build process, gp-releng re-organized the
      build artifacts storage.

        Only the artifacts path changed; the content is still
        the same as before.
      Authored-by: Tingfang Bao <bbao@pivotal.io>
    • L
      9e93ba8b
    • J
      Fix CTAS with gp_use_legacy_hashops GUC · b3fb9c3f
      Committed by Jimmy Yih
      When the gp_use_legacy_hashops GUC was set, CTAS would not assign the
      legacy hash operator class to the new table. This is because CTAS goes
      through a different code path and uses the first operator class of the
      SELECT's result when no distribution key is provided.
      
      Backported from GPDB master 9040f296. There was one conflict:
      the cdbhash_int4_ops operator class oid is different between 6X and
      master (10196 in master vs. 10166 in 6X).
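      A sketch of the scenario (hypothetical table names; needs a Greenplum 6
      cluster, and the catalog query assumes the 6X `gp_distribution_policy`
      layout with a `distclass` column):

      ```sql
      SET gp_use_legacy_hashops = on;

      -- A table created directly picks up the legacy operator class
      -- (e.g. cdbhash_int4_ops) for its distribution key.
      CREATE TABLE t1 (a int, b int) DISTRIBUTED BY (a);

      -- CTAS with no explicit DISTRIBUTED BY takes a different code path;
      -- before this fix it took the first operator class of the SELECT's
      -- result, ignoring the GUC.
      CREATE TABLE t2 AS SELECT * FROM t1;

      -- Compare the operator classes assigned to each table:
      SELECT localoid::regclass, distclass
      FROM gp_distribution_policy
      WHERE localoid IN ('t1'::regclass, 't2'::regclass);
      ```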