1. 08 Jul 2020, 2 commits
  2. 07 Jul 2020, 5 commits
    • Alter table add column on AOCS table inherits the default storage settings · 9a574915
      Committed by xiong-gang
      When ALTER TABLE adds a column to an AOCS table, the storage settings (compresstype,
      compresslevel and blocksize) of the new column can be specified in the ENCODING
      clause; if ENCODING is not specified, the column inherits the settings from the
      table; if the table doesn't have a compression configuration either, the values
      from the GUC 'gp_default_storage_options' are used.
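      A minimal sketch of the three precedence levels (table and column names are hypothetical):
      ```sql
      -- Hypothetical AOCS table with table-level compression settings.
      CREATE TABLE t_aocs (a int)
      WITH (appendonly=true, orientation=column, compresstype=zlib, compresslevel=5);

      -- 1. An explicit ENCODING clause wins:
      ALTER TABLE t_aocs ADD COLUMN b int ENCODING (compresstype=rle_type, compresslevel=1);

      -- 2. No ENCODING clause: the new column inherits the table's settings (zlib, 5).
      ALTER TABLE t_aocs ADD COLUMN c int;

      -- 3. If the table had no compression configuration, the new column would
      --    fall back to the GUC gp_default_storage_options.
      ```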
      9a574915
    • Fix flaky test gp_replica_check · a1a0af55
      Committed by xiong-gang
      When there is a big lag between the primary and mirror replay, gp_replica_check
      will fail if the checkpoint is not replayed within about 60 seconds. Extend the
      timeout to 600 seconds to reduce flakiness.
      a1a0af55
    • Disallow replicated tables from inheriting or being inherited (#10344) · dc4b839e
      Committed by Hao Wu
      Currently, replicated tables are not allowed to inherit from a parent
      table, but ALTER TABLE .. INHERIT can get around the restriction.

      On the other hand, a replicated table is allowed to be inherited
      by a hash-distributed table, which makes things much more complicated.
      When the parent table is declared as a replicated table inherited by
      a hash-distributed table, the data on the parent is replicated
      but the data on the child is hash distributed. When running
      `select * from parent;`, the generated plan is:
      ```
      gpadmin=# explain select * from parent;
                                       QUERY PLAN
      -----------------------------------------------------------------------------
       Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..4.42 rows=14 width=6)
         ->  Append  (cost=0.00..4.14 rows=5 width=6)
               ->  Result  (cost=0.00..1.20 rows=4 width=7)
                     One-Time Filter: (gp_execution_segment() = 1)
                     ->  Seq Scan on parent  (cost=0.00..1.10 rows=4 width=7)
               ->  Seq Scan on child  (cost=0.00..3.04 rows=2 width=4)
       Optimizer: Postgres query optimizer
      (7 rows)
      ```
      It's not particularly useful for the parent table to be replicated.
      So, we disallow replicated tables from being inherited.
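      With this change, the ALTER TABLE path that previously bypassed the restriction is rejected as well; a minimal sketch (table names hypothetical, exact error text may differ):
      ```sql
      CREATE TABLE rep_parent (a int) DISTRIBUTED REPLICATED;
      CREATE TABLE hash_child (a int) DISTRIBUTED BY (a);

      -- Previously accepted despite the CREATE TABLE ... INHERITS restriction;
      -- now disallowed:
      ALTER TABLE hash_child INHERIT rep_parent;  -- ERROR
      ```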
      Reported-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      dc4b839e
    • Move slack command trigger repo · 8e6a46f7
      Committed by Chris Hajas
      We've moved the repo that holds trigger commits to a private repo since
      there wasn't anything interesting there.
      8e6a46f7
    • Fix vacuum on temporary AO table · 327abdb5
      Committed by Ashwin Agrawal
      The path constructed in OpenAOSegmentFile() didn't take into account
      the "t_" prefix of temporary-table filenames. Ideally, the correct
      filename is passed to the function, so there is no need to construct it again.

      It would be better to move MakeAOSegmentFileName() inside
      OpenAOSegmentFile(), as all callers call it except
      truncate_ao_perFile(), which doesn't fit that model.
      327abdb5
  3. 06 Jul 2020, 1 commit
    • Fix bitmap scan crash issue for AO/AOCS table. (#10407) · cb5d18d1
      Committed by (Jerome)Junfeng Yang
      When ExecReScanBitmapHeapScan gets executed, the bitmap state (tbmiterator
      and tbmres) is freed in freeBitmapState. So tbmres is NULL, and we need to
      reinitialize the bitmap state to start the scan from the beginning and reset the
      AO/AOCS bitmap page flags (baos_gotpage, baos_lossy, baos_cindex and baos_ntuples).

      This matters especially when ExecReScan happens on a bitmap append-only scan
      before all the matched tuples in the bitmap are consumed, for example, a Bitmap
      Heap Scan as the inner plan of a Nest Loop Semi Join. If tbmres is not reinitialized
      and not all tuples in the last bitmap were read, BitmapAppendOnlyNext will assume the
      current bitmap page still has data to return, although the bitmap state has already been freed.

      From the code, for a Nest Loop Semi Join, when a match is found, a new outer slot is
      requested, and then `ExecReScanBitmapHeapScan` gets called; `node->tbmres` and
      `node->tbmiterator` are set to NULL, while `node->baos_gotpage` remains true.
      When `BitmapAppendOnlyNext` executes, it skips creating a new `node->tbmres`
      and jumps to accessing `tbmres->recheck`.
      Reviewed-by: Jinbao Chen <jinchen@pivotal.io>
      Reviewed-by: Asim R P <pasim@vmware.com>
      cb5d18d1
  4. 03 Jul 2020, 4 commits
  5. 02 Jul 2020, 2 commits
    • docs - add GUC -write_to_gpfdist_timeout (#10391) · 86a53828
      Committed by Mel Kiyama
      * docs - add GUC -write_to_gpfdist_timeout
      
      Add GUC and add link to GUC in gpfdist reference
      
      * docs - correct default value 600 --> 300. Fix xref.
      86a53828
    • Fix Orca optimizer search stage couldn't measure elapsed time correctly · db25c3c8
      Committed by Haisheng Yuan
      Previously, CTimerUser didn't initialize its timer, so the elapsed time reported by
      Orca was not meaningful and sometimes confusing.
      
      When traceflag T101012 is turned on, we can see the following trace message:
      
      [OPT]: Memo (stage 0): [20 groups, 0 duplicate groups, 44 group expressions, 4 activated xforms]
      [OPT]: stage 0 completed in 860087 msec,  plan with cost 1028.470667 was found
      [OPT]: <Begin Xforms - stage 0>
      ......
      [OPT]: <End Xforms - stage 0>
      [OPT]: Search terminated at stage 1/1
      [OPT]: Total Optimization Time: 67ms
      
      As shown above, the stage 0 elapsed time is much greater than the total
      optimization time, which is obviously incorrect.
      db25c3c8
  6. 01 Jul 2020, 3 commits
    • Let FTS mark mirror down if replication keeps crash. (#10327) · 252ba888
      Committed by (Jerome)Junfeng Yang
      For GPDB FTS, if the primary-mirror replication keeps crashing
      continuously and attempts to create the replication connection too many times,
      FTS should mark the mirror down. Otherwise, it may block other
      processes.
      If the WAL starts streaming, the attempt count is reset to 0, because the blocked
      transactions can only be released once the WAL is in streaming state.
      
      The solution for this is:
      
      1. Use `FTSReplicationStatus`, which lives in `gp_replication.c`, to track the current primary-mirror
      replication status. This includes:
          - A continuous failure counter. The counter gets reset once the replication
          starts streaming, or the replication is restarted.
          - A record of the last disconnect timestamp, which is refactored out of the
          `WalSnd` slot.
          The reason for moving this is: when the FTS probe happens, the `WalSnd`
          slot may already have been freed. And the `WalSnd` slot is designed to be reusable.
          It's hacky to read a value from a freed slot in shared memory.

      2. When handling each probe query, `GetMirrorStatus` will check the current
      mirror status and the failure count from the walsender's `FTSReplicationStatus`.
      If the count exceeds the limit, the retry test will ignore the last replication
      disconnect time, since it gets refreshed when a new walsender starts (and
      in the current case, the walsender keeps restarting).
      
      3. In the FTS bgworker: if the mirror is down and retry is set to false, mark the
      mirror down.
      
      A `gp_fts_replication_attempt_count` GUC is added. When the replication failure count
      exceeds this GUC, the last replication disconnect time is ignored when checking for mirror
      probe retry.
      
      The life cycle of an `FTSReplicationStatus`:
      1. It gets created when replication is first enabled during the replication
      start phase. Each replication's sender should have a unique
      `application_name`, which is also used to specify the replication priority
      in a multi-mirror environment. So `FTSReplicationStatus` uses the `application_name`
      to mark itself.

      2. The `FTSReplicationStatus` for a replication will exist until FTS detects
      failure and stops the replication between primary and mirror. Then the
      `FTSReplicationStatus` for that `application_name` will be dropped.

      Currently `FTSReplicationStatus` is used only for GPDB primary-mirror replication.
      252ba888
    • Make QEs not use the GUC gp_enable_global_deadlock_detector · 855f1548
      Committed by Zhenghua Lyu
      Previously, some code executed in QEs checked the value of the GUC
      gp_enable_global_deadlock_detector. The historical reason is:
        Before we had GDD, UPDATE|DELETE operations could not be
        executed concurrently, so we did not hit the EvalPlanQual
        issue or the concurrent split-update issue. After we added GDD,
        we hit such issues and added code to resolve them, guarding it
        with `if (gp_enable_global_deadlock_detector)` from
        the start. It was just habitual thinking.
      
      In fact, we do not rely on it in QEs, and I tried to remove this
      in the context: https://github.com/greenplum-db/gpdb/pull/9992#pullrequestreview-402364938.
      I tried to add an assert there but could not handle utility mode,
      as Heikki commented. To continue that idea, we can just remove
      the check of gp_enable_global_deadlock_detector. This brings two
      benefits:
        1. Some users only change this GUC on the master. By removing the usage
           in QEs, it is now safe to make the GUC master-only.
        2. We can bring back the trick of only restarting the master node's postmaster
           to enable GDD, which saves a lot of time in the pipeline. This commit
           also does this for the isolation2 test cases: lockmodes and gdd/.
      
      GitHub issue https://github.com/greenplum-db/gpdb/issues/10030 is resolved by this commit.
      855f1548
    • Fix memory accounting (#9739) · 048ca12b
      Committed by Hao Wu
      1. `localAllocated` is not set to 0 when resetting the memory context, which
      makes the statistics incorrect.
      2. Size is an unsigned integer, and `MEMORY_ACCOUNT_UPDATE_ALLOCATED`
      uses the negated value for subtraction. This happens to work because the
      operands have the same bit width, but we'd better avoid using it this way.
      3. Fix memory accounting when setting a new parent. The accounting
      parent must be one of the ancestors of the child. Re-setting the parent
      might break this rule, see below:

      If we set the new parent to a sibling or an ancestor of the old parent,
      every child's accounting parent may have to change, because
      the old accountingParent may no longer be an ancestor of the child's AllocSet.
      We must loop to find the new one. For example:
      
      root <- A <- B <- H <- C <- D
                     \
                       <-E
      root <- A <- B <- H
                     \
                       <-E <- C <- D
      
      We want to change the parent of C to E; both C and D have B
      as their accountingParent. After we set the new parent of C to E,
      the accountingParent has to be updated. But if the new parent of C were B,
      the rule above would still hold, so we wouldn't have to update the accountingParent.
      
      Issue 1 was fixed in PR https://github.com/greenplum-db/gpdb/pull/10106.
      Reviewed-by: Ning Yu <nyu@pivotal.io>
      048ca12b
  7. 30 Jun 2020, 3 commits
    • Fix a corner case which dumps CaseTestExpr for IS NOT DISTINCT FROM. (#10348) · 5f58e117
      Committed by (Jerome)Junfeng Yang
      For the example below:
      ```
      CREATE TABLE mytable2 (
          key character varying(20) NOT NULL,
          key_value character varying(50)
      ) DISTRIBUTED BY (key);
      
      CREATE VIEW aaa AS
      SELECT
          CASE mytable2.key_value
              WHEN IS NOT DISTINCT FROM 'NULL'::text THEN 'now'::text::date
              ELSE to_date(mytable2.key_value::text, 'YYYYMM'::text)
              END AS t
          FROM mytable2;
      
      ```
      
      mytable2.key_value will be cast to type date. For the clause `(ARG1) IS NOT
      DISTINCT FROM (ARG2)`, this causes ARG1 to become a RelabelType node that
      contains a CaseTestExpr node in RelabelType->arg.
      
      So when dumping the view, it'll dump an extra `CASE_TEST_EXPR` as below:
      ```
      select pg_get_viewdef('notdisview3',false);
                                     pg_get_viewdef
      -----------------------------------------------------------------------------
        SELECT                                                                    +
               CASE mytable2.key_value                                            +
                   WHEN (CASE_TEST_EXPR) IS NOT DISTINCT FROM 'NULL'::text THEN ('now'::text)::date+
                   ELSE to_date((mytable2.key_value)::text, 'YYYYMM'::text)       +
               END AS t                                                           +
          FROM mytable2;
      (1 row)
      ```
      
      I dug into commit a453004e: if the left-hand argument of `IS NOT DISTINCT
      FROM` contains any `CaseTestExpr` node, the left-hand arg should be omitted.
      `CaseTestExpr` is a placeholder for the CASE expression.
      Reviewed-by: Paul Guo <paulguo@gmail.com>
      5f58e117
    • Fix assert failure in cdbcomponent_getCdbComponents() (#10355) · 06d7fe0a
      Committed by Paul Guo
      It could be called in utility mode, but we should avoid calling into
      FtsNotifyProber() in that case. Ideally we would do that only when we access the
      master node and postgres is not in single mode, but that does not seem
      necessary.
      
      Here is the stack of the issue I encountered.
      
      0  0x0000003397e32625 in raise () from /lib64/libc.so.6
      1  0x0000003397e33e05 in abort () from /lib64/libc.so.6
      2  0x0000000000b39844 in ExceptionalCondition (conditionName=0xeebac0 "!(Gp_role == GP_ROLE_DISPATCH)", errorType=0xeebaaf "FailedAssertion",
          fileName=0xeeba67 "cdbfts.c", lineNumber=101) at assert.c:66
      3  0x0000000000bffb1e in FtsNotifyProber () at cdbfts.c:101
      4  0x0000000000c389a4 in cdbcomponent_getCdbComponents () at cdbutil.c:714
      5  0x0000000000c26e3a in gp_pgdatabase__ (fcinfo=0x7ffd7b009c10) at cdbpgdatabase.c:74
      6  0x000000000076dbd6 in ExecMakeTableFunctionResult (funcexpr=0x3230fc0, econtext=0x3230988, argContext=0x3232b68, expectedDesc=0x3231df8, randomAccess=0 '\000',
      	    operatorMemKB=32768) at execQual.c:2275
      ......
      18 0x00000000009afeb2 in exec_simple_query (query_string=0x316ce20 "select * from gp_pgdatabase") at postgres.c:1778
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      06d7fe0a
    • Docs - update PXF version info · b5078b99
      Committed by David Yozie
      b5078b99
  8. 29 Jun 2020, 5 commits
  9. 26 Jun 2020, 6 commits
    • docs - update misc xrefs to new version (#10373) · 04fdf55a
      Committed by Lisa Owen
      04fdf55a
    • a796bc69
    • Silence useless NOTICEs emitted by gpcheckcat. · 95ef455d
      Committed by Heikki Linnakangas
      These NOTICEs:
      
          Connected as user 'heikki' to database 'postgres', port '5432', gpdb version '7.0'
          -------------------------------------------------------------------
          Batch size: 4
          Performing test 'duplicate'
          NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column(s) named 'segid' as the Greenplum Database data distribution key for this table.
          HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
          NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column(s) named 'segid' as the Greenplum Database data distribution key for this table.
          HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
          ...
          NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column(s) named 'segid' as the Greenplum Database data distribution key for this table.
          HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
          Total runtime for test 'duplicate': 0:00:06.83
      
          SUMMARY REPORT: PASSED
          ===================================================================
          Completed 1 test(s) on database 'postgres' at 2020-06-25 11:49:19 with elapsed time 0:00:06
          Found no catalog issue
      
      They're useless chatter. Add an explicit DISTRIBUTED BY clause to the CREATE TABLE AS
      commands to silence them.
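      Roughly (a sketch only; gpcheckcat's actual query text differs), the change is of this shape:
      ```sql
      -- Before: the server picks a distribution key itself and emits the NOTICE.
      CREATE TABLE dup_check AS
          SELECT oid, gp_segment_id AS segid FROM pg_class;

      -- After: the explicit clause silences the NOTICE.
      CREATE TABLE dup_check AS
          SELECT oid, gp_segment_id AS segid FROM pg_class
          DISTRIBUTED BY (segid);
      ```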
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
      95ef455d
    • docs - update some pxf xrefs, add excludes (#10360) · aedfe828
      Committed by Lisa Owen
      aedfe828
    • Fix dbid inconsistency on spread mirroring · 8d008792
      Committed by Denis Smirnov
      Mirror registration currently passes through several steps:
      1. CREATE_QE_ARRAY (QE_MIRROR_ARRAY is ordered by content)
      2. ARRAY_REORDER (QE_MIRROR_ARRAY is ordered by port)
      3. CREATE_ARRAY_SORTED_ON_CONTENT_ID (form QE_MIRROR_ARRAY_SORTED_ON_CONTENT_ID
         from QE_MIRROR_ARRAY)
      4. REGISTER_MIRRORS (walk through QE_MIRROR_ARRAY, register mirrors with
         pg_catalog.gp_add_segment_mirror on the master's gp_segment_configuration
         and update QE_MIRROR_ARRAY with the returned dbids)
      5. CREATE_SEGMENT (walk through QE_MIRROR_ARRAY_SORTED_ON_CONTENT_ID with old
         dbids and create mirrors on segment hosts with pg_basebackup)
      The problem is in step 4: we update the wrong array (QE_MIRROR_ARRAY instead
      of QE_MIRROR_ARRAY_SORTED_ON_CONTENT_ID). Because of that we get an inconsistency
      between mirror dbids in gp_segment_configuration and internal.auto.conf files.
      This can leave the cluster inoperable in situations where we promote a
      failed primary from a mirror with wrong dbids (FTS can't resolve this).

      Also fixed column indexes in the array used for segment array ordering.
      That was not done in commit https://github.com/greenplum-db/gpdb/commit/03c7d557720c5a78af1e2574ac385d10a0797f5e,
      which prepended the array with a new hostname column.
      Co-authored-by: Vasiliy Ivanov <ivi@arenadata.io>
      8d008792
    • docs - update pxf migration info and greenplum upgrade info (#10322) · 9d0e3cd0
      Committed by Lisa Owen
      * docs - update pxf migration info and greenplum upgrade info
      
      * post upgrade init not always required
      
      * use procedure step name for xref
      
      * forgot about this page - xref to pxf dita landing page
      
      * fix some links
      
      * xref to PXF packaging README on greenplum oss docs landing page
      
      * redirect install topic to pxf README for oss docs
      
      * pxf oss subnav includes only migration/upgrade topics
      
      * next greenplum minor releases will include embedded pxf
      
      * migrate topic - call pxf directly
      
      * update upgrade subnav title
      
      * use underscore in filename
      9d0e3cd0
  10. 25 Jun 2020, 9 commits
    • Recompile plperl to set the right RUNPATH · cc985a3f
      Committed by Shaoqi Bai
      Currently, GPDB5 is built with --enable-rpath (the default configure
      option). For plperl, its Makefile specifies an absolute path to the
      location of "$(perl -MConfig -e 'print $Config{archlibexp}')/CORE"
      (e.g., /usr/lib64/perl5/CORE on RHEL7). This directory is not on the
      default search path for the runtime linker. Without the proper RUNPATH
      entry, libperl.so cannot be found when postgres tries to load the plperl
      extension.
      
      Without the correct RUNPATH set for plperl.so, you will see an error like the
      following:
      ERROR:  could not load library
      "/usr/local/greenplum-db-devel/lib/postgresql/plperl.so": libperl.so:
      cannot open shared object file: No such file or directory
      Authored-by: Shaoqi Bai <bshaoqi@vmware.com>
      (cherry picked from commit a3eeadd5a575820ebc9a401c96b252355f183bb1)
      cc985a3f
    • Remove AIX logic from generate-greenplum-path.sh · 0c9d63e1
      Committed by Bradford D. Boyle
      [#173046174]
      Authored-by: Bradford D. Boyle <bradfordb@vmware.com>
      (cherry picked from commit 28bcd4447551b3b9790e23d7865dbf403f79ef36)
      0c9d63e1
    • Recompile plpython subdir to set the right RUNPATH · de611505
      Committed by Shaoqi Bai
      Authored-by: Shaoqi Bai <sbai@pivotal.io>
      (cherry picked from commit 6b71cd2e0085170139fd1c645696e6e2b8895058)
      de611505
    • Add curly braces for GPHOME var · f9c946ca
      Committed by Tingfang Bao
      Authored-by: Tingfang Bao <baotingfang@gmail.com>
      (cherry picked from commit ae9bcdda4252524401fe6d1b4752355aa66e18ea)
      f9c946ca
    • Using $ORIGIN as RUNPATH for runtime link · da4d7f95
      Committed by Bradford D. Boyle
      When upgrading from GPDB5 to GPDB6, gpupgrade will need to be able to call
      binaries from both major versions. Relying on LD_LIBRARY_PATH is not an option
      because this can cause binaries to load libraries from the wrong version.
      Instead, we need the libraries to have RPATH/RUNPATH set correctly. Since the
      built binaries may be relocated we need to use a relative path.
      
      This commit disables the rpath configure option (which would result in an
      absolute path) and uses LDFLAGS to set `$ORIGIN`.
      
      For most ELF files a RUNPATH of `$ORIGIN/../lib` is correct. For pygresql
      python module and the quicklz_compressor extension, the RUNPATH needs to be
      adjusted accordingly. The LDFLAGS for those artifacts can be modified with
      different environment variables PYGRESQL_LDFLAGS and QUICKLZ_LDFLAGS.
      
      We always use `--enable-new-dtags` to set RUNPATH. On CentOS 6, with new dtags,
      both DT_RPATH and DT_RUNPATH are set and DT_RPATH will be ignored.
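      A sketch of the flags described above (the exact Makefile changes differ; this only shows the common GNU ld spelling):

      ```shell
      # Single quotes keep $ORIGIN literal so the dynamic linker, not the shell,
      # expands it at load time relative to the ELF file's own location.
      LDFLAGS='-Wl,--enable-new-dtags -Wl,-rpath,$ORIGIN/../lib'
      # pygresql and quicklz_compressor override this via PYGRESQL_LDFLAGS and
      # QUICKLZ_LDFLAGS with a relative path adjusted to their install depth.
      export LDFLAGS
      ```
      
      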
      
      [#171588878]
      Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
      Co-authored-by: Xin Zhang <xzhang@pivotal.io>
      (cherry picked from commit 2eec06b39abe8cb5370e949056f26997b9d02572)
      da4d7f95
    • Update generate-greenplum-path.sh for upgrade package · 5e139b76
      Committed by Tingfang Bao
      Following the [Greenplum Server RPM Packaging Specification][0], we need
      to update the greenplum_path.sh file and ensure many environment variables
      are set correctly.

      There are a few basic requirements for the Greenplum Path Layer:
      
      * greenplum-path.sh shall be installed to `${installation
        prefix}/greenplum-db-[package-version]/greenplum_path.sh`
      * ${GPHOME} is set by given parameter, by default it should point to
        `%{installation prefix}/greenplum-db-devel`
      * `${LD_LIBRARY_PATH}` shall be safely set to avoid a trailing colon
        (which will cause the linker to search the current directory when
        resolving shared objects)
      * `${PYTHONPATH}` shall be set to `${GPHOME}/lib/python`
      * `${PATH}` shall be set to `${GPHOME}/bin:${PATH}`
      * If the file `${GPHOME}/etc/openssl.cnf` exists then `${OPENSSL_CONF}`
        shall be set to `${GPHOME}/etc/openssl.cnf`
      * The greenplum_path.sh file shall pass [ShellCheck][1]
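      The requirements above can be sketched roughly as follows (a sketch only, not the shipped script; the GPHOME default is illustrative):

      ```shell
      # Default GPHOME if the caller did not pass one in.
      GPHOME="${GPHOME:-/usr/local/greenplum-db-devel}"
      export GPHOME

      # Avoid a trailing colon: only append the old value when it is non-empty,
      # so the linker never searches the current directory.
      if [ -n "${LD_LIBRARY_PATH:-}" ]; then
          LD_LIBRARY_PATH="${GPHOME}/lib:${LD_LIBRARY_PATH}"
      else
          LD_LIBRARY_PATH="${GPHOME}/lib"
      fi
      export LD_LIBRARY_PATH

      export PYTHONPATH="${GPHOME}/lib/python"
      export PATH="${GPHOME}/bin:${PATH}"

      # Only set OPENSSL_CONF when the file actually exists.
      if [ -f "${GPHOME}/etc/openssl.cnf" ]; then
          export OPENSSL_CONF="${GPHOME}/etc/openssl.cnf"
      fi
      ```
      
      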
      
      [0]: https://github.com/greenplum-db/greenplum-database-release/blob/master/Greenplum-Server-RPM-Packaging-Specification.md#detailed-package-behavior
      [1]: https://github.com/koalaman/shellcheck
      Authored-by: Tingfang Bao <bbao@pivotal.io>
      (cherry picked from commit 5150237ac884227383f9f4f94e2383450756e7da)
      5e139b76
    • Fix 'orafce' to work on GPDB. · 0b5113bf
      Committed by Heikki Linnakangas
      The previous commit updated 'orafce' module to new upstream version.
      This commit fixes it so that it works with GPDB. We carried many of these
      diffs against the old version too, but some are new with the new orafce
      version.
      
      Old differences, put back in this commit:
      
      - In the 'finish no data found' NOTICEs, GPDB strips trailing spaces
      
      - GPDB doesn't allow SQL_ASCII as database encoding. Use 'iso-8859-1' in
        the 'nlssort' test instead.
      
      - Don't install dbms_alert. It doesn't work in GPDB.
      
      - GPDB doesn't have 'tts_tableOid' field in TupleTableSlot
      
      New differences:
      
      - Use init_file to suppress extra NOTICEs about distribution keys and such
        that GPDB prints. (Previously, we had changed the expected outputs to
        include the NOTICEs, but this seems nicer.)
      
      - GPDB has built-in DECODE() support, so don't install the orafce decode()
        compatibility functions. The DECODE() is transformed to a CASE-WHEN
        construct, so fix all the regression tests that had 'decode' as the
        result column name now into 'case' instead. A few tests are now erroring
        out because of missing casts between text and other datatypes, but that's
        expected / accepted because the GPDB implementation of DECODE() is
        different.
      0b5113bf
    • Update to orafce version 3.13.4. · 3f0e2baf
      Committed by Heikki Linnakangas
      This replaces the sources in the tree verbatim with upstream version
      3.13.4. It does not work with GPDB as it is, but I wanted to have the GPDB
      changes in separate commit. The next commit will fix it so that it works
      again.
      
      The reason to update it now is that we are working on merging PostgreSQL
      v12 into GPDB, and the old 3.7 version we had in the repository will not
      work with PostgreSQL v12. This version should work with both PostgreSQL
      9.6, where we are at the moment, and with v12.
      3f0e2baf
    • Fix unit tests for mainUtils · c81d9256
      Committed by Tyler Ramer
      Commit 4ab87e24f61b9aff26738cdb154673e03cd772a8 replaced too many usages
      of "with self.lock as l" with "with self.lock"
      
      This commit corrects the missing "with self.lock as l" problems to
      ensure the unit tests pass
      Authored-by: Tyler Ramer <tramer@vmware.com>
      c81d9256