1. 25 Jun 2020, 7 commits
    • Remove AIX logic from generate-greenplum-path.sh · 800d1f07
      Committed by Bradford D. Boyle
      [#173046174]
      Authored-by: Bradford D. Boyle <bradfordb@vmware.com>
    • Recompile plpython subdir to set the right RUNPATH · d7a8a2a2
      Committed by Shaoqi Bai
      Authored-by: Shaoqi Bai <sbai@pivotal.io>
    • Add curly braces for GPHOME var · 2875e8c4
      Committed by Tingfang Bao
      Authored-by: Tingfang Bao <baotingfang@gmail.com>
    • Using $ORIGIN as RUNPATH for runtime link · c6bd54a4
      Committed by Bradford D. Boyle
      When upgrading from GPDB5 to GPDB6, gpupgrade will need to be able to call
      binaries from both major versions. Relying on LD_LIBRARY_PATH is not an option
      because this can cause binaries to load libraries from the wrong version.
      Instead, we need the libraries to have RPATH/RUNPATH set correctly. Since the
      built binaries may be relocated we need to use a relative path.
      
      This commit disables the rpath configure option (which would result in an
      absolute path) and uses LDFLAGS to set `$ORIGIN` instead.
      
      For most ELF files a RUNPATH of `$ORIGIN/../lib` is correct. For the pygresql
      python module and the quicklz_compressor extension, the RUNPATH needs to be
      adjusted accordingly. The LDFLAGS for those artifacts can be overridden with
      the dedicated environment variables PYGRESQL_LDFLAGS and QUICKLZ_LDFLAGS.
      
      We always use `--enable-new-dtags` to set RUNPATH. On CentOS 6, with new dtags,
      both DT_RPATH and DT_RUNPATH are set and DT_RPATH will be ignored.
      
      [#171588878]
      Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
      Co-authored-by: Xin Zhang <xzhang@pivotal.io>
    • Update generate-greenplum-path.sh for upgrade package · d02cb625
      Committed by Tingfang Bao
      Following the [Greenplum Server RPM Packaging Specification][0], we need
      to update the greenplum_path.sh file and ensure that the environment
      variables it sets are correct.
      
      There are a few basic requirements for the Greenplum Path Layer:
      
      * greenplum_path.sh shall be installed to `%{installation
        prefix}/greenplum-db-[package-version]/greenplum_path.sh`
      * `${GPHOME}` is set by a given parameter; by default it should point to
        `%{installation prefix}/greenplum-db-devel`
      * `${LD_LIBRARY_PATH}` shall be safely set to avoid a trailing colon
        (which will cause the linker to search the current directory when
        resolving shared objects)
      * `${PYTHONHOME}` shall be set to `${GPHOME}/ext/python`
      * `${PYTHONPATH}` shall be set to `${GPHOME}/lib/python`
      * `${PATH}` shall be set to `${GPHOME}/bin:${PYTHONHOME}/bin:${PATH}`
      * If the file `${GPHOME}/etc/openssl.cnf` exists then `${OPENSSL_CONF}`
        shall be set to `${GPHOME}/etc/openssl.cnf`
      * The greenplum_path.sh file shall pass [ShellCheck][1]
      
      [0]: https://github.com/greenplum-db/greenplum-database-release/blob/master/Greenplum-Server-RPM-Packaging-Specification.md#detailed-package-behavior
      [1]: https://github.com/koalaman/shellcheck
      Co-authored-by: Tingfang Bao <bbao@pivotal.io>
      Co-authored-by: Xin Zhang <zhxin@vmware.com>
      Co-authored-by: Ning Wu <ningw@vmware.com>
      Co-authored-by: Shaoqi Bai <bshaoqi@vmware.com>
    • Improve handling of target lists of window queries · d3fcb525
      Committed by Hans Zeller
      Fixing two bugs related to handling queries with window functions and
      refactoring the related code.
      
      ORCA can't handle expressions on window functions, like `rank() over () - 1`,
      in a target list. To avoid these, we split Query blocks that contain
      them into two: the new lower Query computes the window functions, the
      new upper Query computes the expressions.
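
      As a purely illustrative sketch (table and column names are invented,
      not from the commit), the split looks like this:

      ```sql
      -- ORCA can't digest the expression on top of the window function:
      SELECT rank() OVER (ORDER BY b) - 1 FROM t;

      -- ...so the Query block is conceptually split in two: the lower level
      -- computes the window function, the upper level computes the expression.
      SELECT r - 1
      FROM (SELECT rank() OVER (ORDER BY b) AS r FROM t) AS lower_q;
      ```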
      
      We use three mutators and walkers to help with this process:

      * Increase the varlevelsup of outer references in the new lower Query,
        since we now inserted a new scope above it.
      * Split expressions on window functions into the window functions (for
        the lower scope) and expressions with a Var substituted for the
        WindowFunc (for the upper scope). Also adjust the varattno for Vars
        that now appear in the upper scope.
      * Increase the ctelevelsup for any RangeTblEntrys in the lower scope.

      The bugs we saw were related to these mutators: the second one didn't
      recurse correctly into the required types of subqueries, and the third
      one didn't always increment the query level correctly. The refactor
      hopefully will simplify this code somewhat. For details, see the
      individual commit messages.
      
      Note: In addition to cherry-picking from the master branch, also
      removed the temporary check that triggers a fallback to planner when
      we see window queries with outer refs in them. See #10265.
      
      * Add test cases
      * Refactor: Renaming misc variables and methods
      * Refactor RunIncrLevelsUpMutator
      
      Made multiple changes to how we use the mutator:
      
      1. Start the call with a method from gpdbwrappers.h, for two reasons:
         a) execute the needed wrapping code for GPDB calls
         b) avoid calling the walker function on the top node, since we don't
            want to increment the query level when we call the method on a
            query node
      
      2. Now that we don't have to worry anymore about finding a top-level
         query node, simplify the logic to recurse into subqueries by simply
         doing that when we encounter a Query node further down. Remove the
         code dealing with sublinks, RTEs, CTEs.
      
      3. From inside the walker functions, call GPDB methods without going
         through the wrapping layer again.
      
      4. Let the mutator code make a copy of the target entry instead of
         creating one before calling the mutator.
      
      * Refactor RunWindowProjListMutator, fix bug
      
      Same as the previous commit (the same four changes listed above apply),
      this time RunWindowProjListMutator gets refactored.
      This change should also fix one of the bugs we have seen: this mutator
      did not recurse into derived tables that were inside scalar subqueries
      in the select list.
      
      * Refactor RunFixCTELevelsUpMutator, fix bug
      
      Converted this mutator into a walker, since only walkers visit RTEs, which
      makes things a lot easier.
      
      Fixed a bug where we incremented the CTE levels for scalar subqueries
      that went into the upper-level query.
      
      Otherwise, same types of changes as in previous two commits.
      
      * Refactor and reorder code
      
      Slightly modified the flow in methods CQueryMutators::ConvertToDerivedTable
      and CQueryMutators::NormalizeWindowProjList
      
      * Remove obsolete methods
      * Update expected files
      
      See https://github.com/greenplum-db/gpdb/pull/10309
      
      (cherry picked from commit 33c4582e)
  2. 24 Jun 2020, 6 commits
  3. 23 Jun 2020, 4 commits
    • Fix flaky appendonly test. (#10349) · 6a730079
      Committed by (Jerome)Junfeng Yang
      This fixes the error:
      ```
      --- /tmp/build/e18b2f02/gpdb_src/src/test/regress/expected/appendonly.out  2020-06-16 08:30:46.484398384 +0000
      +++ /tmp/build/e18b2f02/gpdb_src/src/test/regress/results/appendonly.out   2020-06-16 08:30:46.556404454 +0000
      @@ -709,8 +709,8 @@
          SELECT oid FROM pg_class WHERE relname='tenk_ao2'));
           case    | objmod | last_sequence | gp_segment_id
       -----------+--------+---------------+---------------
      + NormalXid |      0 | 1-2900        |             1
        NormalXid |      0 | >= 3300       |             0
      - NormalXid |      0 | >= 3300       |             1
        NormalXid |      0 | >= 3300       |             2
        NormalXid |      1 | zero          |             0
        NormalXid |      1 | zero          |             1
      ```
      
      The flakiness arises because ORCA treats a `CREATE TABLE` statement
      without `DISTRIBUTED BY` as creating a randomly distributed table,
      while the planner treats the table as distributed by its first column.
      
      ORCA:
      ```
      CREATE TABLE tenk_ao2 with(appendonly=true, compresslevel=0,
      blocksize=262144) AS SELECT * FROM tenk_heap;
      NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause. Creating a NULL
      policy entry.
      ```
      
      Planner:
      ```
      CREATE TABLE tenk_ao2 with(appendonly=true, compresslevel=0,
      blocksize=262144) AS SELECT * FROM tenk_heap;
      NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column(s)
      named 'unique1' as the Greenplum Database data distribution key for this
      table.
      ```
      
      So the data distribution for table tenk_ao2 is not as expected.
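
      A sketch of how the test can be made deterministic (illustrative; the
      actual fix may differ): declare the distribution explicitly so both
      optimizers agree.

      ```sql
      -- With an explicit DISTRIBUTED BY, ORCA and the planner produce the
      -- same data distribution, so the per-segment output is stable.
      CREATE TABLE tenk_ao2 WITH (appendonly=true, compresslevel=0,
      blocksize=262144) AS SELECT * FROM tenk_heap
      DISTRIBUTED BY (unique1);
      ```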
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
    • Make cdbpullup_missingVarWalker also consider PlaceHolderVar. · 5750a0f2
      Committed by Zhenghua Lyu
      When the planner adds a redistribute motion above a subplan, it invokes
      `cdbpullup_findEclassInTargetList` to make sure the distkey can be
      computed from the subplan's targetlist. When the distkey is an
      expression based on PlaceHolderVar elements in the targetlist, the
      function `cdbpullup_missingVarWalker` did not handle it correctly.
      
      For example, when distkey is:
      
      ```sql
      CoalesceExpr [coalescetype=23 coalescecollid=0 location=586]
              [args]
                      PlaceHolderVar [phrels=0x00000040 phid=1 phlevelsup=0]
                              [phexpr]
                                      CoalesceExpr [coalescetype=23 coalescecollid=0 location=49]
                                              [args] Var [varno=6 varattno=1 vartype=23 varnoold=6 varoattno=1]
      ```
      
      and targetlist is:
      
      ```
      TargetEntry [resno=1]
              Var [varno=2 varattno=1 vartype=23 varnoold=2 varoattno=1]
      TargetEntry [resno=2]
              Var [varno=2 varattno=2 vartype=23 varnoold=2 varoattno=2]
      TargetEntry [resno=3]
              PlaceHolderVar [phrels=0x00000040 phid=1 phlevelsup=0]
                      [phexpr]
                              CoalesceExpr [coalescetype=23 coalescecollid=0 location=49]
                                      [args] Var [varno=6 varattno=1 vartype=23 varnoold=6 varoattno=1]
      TargetEntry [resno=4]
              PlaceHolderVar [phrels=0x00000040 phid=2 phlevelsup=0]
                      [phexpr]
                              CoalesceExpr [coalescetype=23 coalescecollid=0 location=78]
                                      [args] Var [varno=6 varattno=2 vartype=23 varnoold=6 varoattno=2]
      ```
      
      Previously, `cdbpullup_missingVarWalker` considered only Var nodes,
      which caused it to fail on such targetlists.
      
      See Github issue: https://github.com/greenplum-db/gpdb/issues/10315 for
      details.
      
      This commit fixes the issue by considering PlaceHolderVar in function
      `cdbpullup_missingVarWalker`.
    • Fix pgupgrade unit tests · a4fffa3d
      Committed by Bhuvnesh Chaudhary
      - Add the --disable-gpcloud configure flag, as it's not required for
        testing pg_upgrade
      - Remove -fsanitize=address
    • Docs - updating book build dependencies · b6d4e444
      Committed by David Yozie
  4. 20 Jun 2020, 1 commit
  5. 19 Jun 2020, 2 commits
    • Avoid generating core files during testing. (#10304) · 88ed7331
      Committed by Paul Guo
      We have some negative tests that deliberately panic and thus generate core
      files if the system is configured for core dumps. Long ago we optimized to
      avoid generating core files in some cases. Now we have found other new
      scenarios that can be optimized further.
      
      1. Avoid core file generation with setrlimit() in the FATAL fault inject
      code. Sometimes FATAL is upgraded to PANIC (e.g. in a critical section, or
      on failure while doing QD prepare related work), so we can avoid generating
      core files for this scenario also. Note that even if the FATAL is not
      upgraded, it is mostly fine to avoid core file generation since the process
      will quit soon. With this change, we avoid two core files from test
      isolation2:crash_recovery_dtm.
      
      2. We previously had a sanity check of dbid/segidx in QE:HandleFtsMessage(),
      which panics on inconsistency when cassert is enabled, but it seems that we
      really do not need to panic: the root cause of such a failure is quite
      straightforward, the call stack is quite simple (PostgresMain() ->
      HandleFtsMessage()), and that part of the code does not involve shared
      memory, so there is no need to worry about shared-memory corruption (else we
      might want a core file to check). Downgrade the log level to FATAL. This
      avoids 6 core files from test isolation2:segwalrep/recoverseg_from_file.
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
      
      Cherry-picked from 4a61357c
    • docs - add views pg_stat_all_tables and indexes - 6X (#10227) · 50939b5c
      Committed by Mel Kiyama
      * docs - add views pg_stat_all_tables and indexes - 6X
      
      The views currently display access statistics only from master.
      Add 6.x specific DDL for views that display access statistics from master and segments.
      
      Also add some statistics GUCs.
      --track_activities
      --track_counts
      
      * docs - clarify that seq_scan and idx_scan refer to the total number of
        scans from all segments (see the example below)
      
      * docs - minor edits.
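
      As a rough illustration (not part of the commit), the counters the doc
      change describes can be read with an ordinary query against the view:

      ```sql
      -- Per the 6X doc change, seq_scan and idx_scan here are totals
      -- across all segments.
      SELECT relname, seq_scan, idx_scan
      FROM pg_stat_all_tables
      WHERE schemaname = 'public'
      ORDER BY seq_scan DESC;
      ```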
  6. 18 Jun 2020, 3 commits
    • Distinguish between tuple deleted by split update and normal delete · 64b8664b
      Committed by Asim R P
      Split update (distribution key update) in Greenplum is implemented by
      deleting a tuple from the original segment and inserting it into the new
      segment.  This is similar to an update to a partition key column in
      upstream PostgreSQL, which is handled by commit f16241be.  This
      patch cherry-picks relevant parts of that commit.  The patch helps to
      distinguish a tuple deleted as part of split update from a tuple
      deleted by a regular delete operation.  Concurrent attempts to update or
      delete a split-updated tuple need to be aware that the update chain for
      this tuple cannot be traced because the updated tuple no longer resides
      in the same heap table (it's moved to another segment).  Concurrent
      updates to a split-updated tuple now result in an error.  Previously,
      this case was treated like a delete and it would lead to wrong results.
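
      For illustration only (hypothetical table and values, not from the
      commit): an update that changes the distribution key is executed as a
      delete plus an insert, which is exactly what the new tuple-header
      marking distinguishes from a plain delete.

      ```sql
      CREATE TABLE accounts (id int, region int) DISTRIBUTED BY (region);
      INSERT INTO accounts VALUES (42, 1);

      -- Changing the distribution key may move the row to another segment;
      -- under the hood this is a "split update": a delete on the old segment
      -- followed by an insert on the new one, not an in-place heap update.
      UPDATE accounts SET region = 2 WHERE id = 42;
      ```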
      
      The main difference between the upstream commit and this patch is that the
      tuple header change is not recorded in WAL.  Mirrors see it as a regular
      heap delete during replay, and crash recovery also replays it as a regular
      heap delete.  That is, the tuple header change is treated like a hint
      bit change.  Tuple masking logic used by replica check and WAL
      consistency checks is updated to mask the tuple header change.  We think
      such a change is OK to make in Greenplum 6 because it only affects
      concurrently running transactions.  Tuples affected by it will be seen
      as deleted once the split-update transaction completes.  Therefore,
      binary compatibility, including downgrade to older minor version, is not
      affected.
      
      Note that moving tuples from one partition to another is prohibited in
      Greenplum 6.  That's why we can repurpose the trick from upstream for
      split-update.
      
      Fixes GitHub issue #8919.
      
      Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/kET4SJ1AHBA/gauiHO2WAwAJ
      Co-authored-by: Zhenghua Lyu <kainwen@gmail.com>
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
    • Init gpmon packet for dynamic scan states (#10046) · abb1032d
      Committed by Denis Smirnov
      Dynamic scans use scan states to initialize and keep sidecar nodes.
      This is done bypassing ExecInitNode, which normally does the gpmon packet
      initialization. As a result, gpmon is not initialized for the dynamic
      sidecar nodes, which causes "bad magic 0" warnings.
    • docs - update postGIS 2.5.4 docs (#10297) · d5685b9f
      Committed by Mel Kiyama
      * docs - update postGIS 2.5.4 docs
      
      Updates for Greenplum PostGIS 2.5.4 v2
      
      --Add list of PostGIS extensions
      --Add support for PostGIS TIGER geocoder, address standardizer and address rules files.
      --Update install/uninstall instructions to use CREATE EXTENSION command
      --Remove postgis_manager.sh script
      --Remove PostGIS Raster limitation.
      
      * docs updated PostGIS 2.5.4 docs based on review comments.
      
      * docs - removed postgis_raster extension.
      
      * docs - review comment updates
        - Added section for installing the PostGIS package
        - Updated section on removing the PostGIS package
        - Fixed typos

      * docs - updated platform requirements for PostGIS 2.5.4 v2
        - Also removed "beta" from GreenplumR
  7. 17 Jun 2020, 3 commits
    • Docs - remove blacklist/whitelist terminology · 748ab20d
      Committed by David Yozie
    • GetLatestSnapshot on QEs always returns without a distributed snapshot. · 58ae7ec0
      Committed by Zhenghua Lyu
      Greenplum tests the visibility of heap tuples first using the
      distributed snapshot. The distributed snapshot is generated on the
      QD and then dispatched to the QEs. Some utility statements need
      to work under the latest snapshot when executing, so they
      invoke the function `GetLatestSnapshot` on the QEs. But remember,
      we cannot get the latest distributed snapshot there.
      
      Subtle cases are Alter Table or Alter Domain statements: on the QD they
      get a snapshot in PortalRun and then try to acquire locks on the
      target table in ProcessUtilitySlow. Here is the key point:
        1. trying to acquire the lock might block on other transactions
        2. the statement is later woken up to continue
        3. by the time it continues, the world has changed, because the
           transactions that blocked it have finished
      
      Previously, the QD did not get a new snapshot before dispatching a
      utility statement to the QEs, so the distributed snapshot did not
      reflect the "world change". This can lead to bugs. For example,
      if the first transaction rewrites the whole heap, and the second
      (an Alter Table or Alter Domain statement) then continues with a
      distributed snapshot in which txn1 has not committed yet, it will
      see no tuples in the new heap!
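
      A hypothetical interleaving of the bug (object names are invented, not
      from the commit):

      ```sql
      CREATE TABLE t (a int, c int) DISTRIBUTED BY (a);

      -- Session 1: rewrite the whole heap (new relfilenode, tuples copied).
      ALTER TABLE t SET WITH (reorganize = true);

      -- Session 2, dispatched concurrently: blocks on session 1's lock and
      -- resumes after it commits. If its QEs kept the pre-block distributed
      -- snapshot, session 1's rewrite would be invisible and the new heap
      -- would look empty.
      ALTER TABLE t ALTER COLUMN c TYPE bigint;
      ```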
      
      This commit fixes the issue by taking a local snapshot when
      `GetLatestSnapshot` is invoked on QEs.
      
      See Github issue: https://github.com/greenplum-db/gpdb/issues/10216
      Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
    • Increase retry count for pg_rewind tests' replication promotion and streaming. (#10299) · 35236eb6
      Committed by (Jerome)Junfeng Yang
      Increase the retry count to prevent test failures; most of the time, the
      failures are caused by slow processing.
  8. 16 Jun 2020, 7 commits
    • Fix lateral PANIC issue when subquery contains limit or groupby. · 300d3c19
      Committed by Zhenghua Lyu
      Previous commit 62579728 fixed a lateral panic issue but did not
      handle all the bad cases, because it only checked whether the top of
      the query tree contains a limit clause. A bad case, for example: if
      the subquery is like `q1 union all (q2 limit 1)`, then the top-level
      query tree does not contain a limit clause.
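
      A hypothetical query of that shape (column names are assumed, reusing
      the tables from the example below):

      ```sql
      -- The LIMIT hides inside one branch of the UNION ALL, so a check of
      -- only the top-level query tree does not see any limit clause.
      SELECT *
      FROM t1_lateral_limit t1
      CROSS JOIN LATERAL (
          SELECT t2.a FROM t2_lateral_limit t2 WHERE t2.b = (t1.c).x
          UNION ALL
          (SELECT t2.a FROM t2_lateral_limit t2 LIMIT 1)
      ) AS x;
      ```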
      
      Another bad case is that the lateral subquery may contain a group-by,
      like:
      
          select * from t1_lateral_limit t1 cross join lateral
          (select (c).x+t2.a, sum(t2.a+t2.b) from t2_lateral_limit t2
           group by (c).x+t2.a)x;
      
      When planning the lateral subquery we do not know where the param is
      in the subquery's query tree, so it is a bit complicated to resolve
      this issue precisely and efficiently.

      This commit adopts a simple method to fix the panic: it just checks
      the subquery's query tree for any group-by or limit clause and, if
      one is found, forces each relation to be gathered and materialized.
      This is not the best plan we might get, but let's make it correct
      first; in the future we should seriously consider how to fully and
      efficiently support lateral.
    • Fix a startup process hang issue when primary node has not yet committed/aborted prepare xlog before shutdown checkpoint. (#10164) · ab69cf9e
      Committed by Paul Guo
      
      On a primary, if there is a prepared transaction that is not yet committed
      or aborted, restarting it in fast mode can lead to a startup process hang.
      That's because the shutdown was clean, so there is no recovery; however,
      some variables used by PrescanPreparedTransactions() are only initialized
      during recovery.
      
      The stack of the hanging startup process is as below:
      
      0x0000000000bfb756 in pg_usleep (microsec=1000) at pgsleep.c:56
      0x00000000005668ad in read_local_xlog_page (state=0x2a9c580, targetPagePtr=201326592, reqLen=32768, targetRecPtr=201571064, cur_page=0x2ab32e0 "\a",
          pageTLI=0x2a9c5c0) at xlogutils.c:829
      0x00000000005646ef in ReadPageInternal (state=0x2a9c580, pageptr=201555968, reqLen=15128) at xlogreader.c:503
      0x0000000000563f86 in XLogReadRecord (state=0x2a9c580, RecPtr=201571064, errormsg=0x7ffe895409d8) at xlogreader.c:226
      0x000000000054c0e0 in PrescanPreparedTransactions (xids_p=0x0, nxids_p=0x0) at twophase.c:1696
      0x0000000000559dcc in StartupXLOG () at xlog.c:7595
      0x00000000008e6c2f in StartupProcessMain () at startup.c:242
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
    • Fix flaky test exttab1 and pxf_fdw · 2a41201c
      Committed by Hubert Zhang
      The flaky case happens when selecting from an external table with the
      option "fill missing fields". Attaching gdb to the QE shows this value
      is sometimes not false there. In ProcessCopyOptions we used
      intVal(defel->arg) to parse the boolean value, which is not correct;
      use defGetBoolean instead.
      Also fix a pxf_fdw test case, which should set fill_missing_fields to
      true explicitly.
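
      A sketch of the option in question (hypothetical table and location;
      the real tests differ):

      ```sql
      -- FILL MISSING FIELDS makes missing trailing columns NULL instead of
      -- raising an error; its boolean value is what was mis-parsed on QEs.
      CREATE READABLE EXTERNAL TABLE ext_example (a int, b text, c text)
      LOCATION ('gpfdist://etlhost:8081/example.csv')
      FORMAT 'csv' (FILL MISSING FIELDS);

      SELECT * FROM ext_example;
      ```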
      
      cherry pick from: f154e5
    • Retry more for replication synchronization waiting to avoid isolation2 test flakiness. (#10281) · 6d829d98
      Committed by Paul Guo
      Some test cases have been failing due to too few retries. Let's increase
      them and also create some common UDFs for reuse.
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
      
      Cherry-picked from ca360700
    • Fix flakiness of "select 1" output after master reset due to injected panic fault before_read_command (#10275) · 8f753424
      Committed by Paul Guo
      
      Several tests inject a panic in before_read_command to trigger a master
      reset. Previously we ran "select 1" after the fault injection query to
      verify it, but the output is sometimes not deterministic, i.e. sometimes
      we do not see the line
      
      PANIC:  fault triggered, fault name:'before_read_command' fault type:'panic'
      
      This was actually observed in test crash_recovery_redundant_dtx, per its
      commit message and test comment.  That test ignores the output of
      "select 1", but we probably still want the output to verify that the
      fault is encountered.
      
      It's still mysterious why the PANIC message is sometimes missing. I spent
      some time digging but reckon I cannot root-cause it in a short time. One
      guess is that the PANIC message was sent to the frontend in errfinish(),
      but the kernel-buffered data was dropped after abort() due to
      ereport(PANIC). Another guess is something wrong related to the libpq
      protocol (not saying it's a libpq bug). In any case, it does not deserve
      much time for test-only work, so simply mask the PANIC message to make
      the test result deterministic without affecting the test's purpose.
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      
      Cherry-picked from 02ad1fc4
      
      Note test segwalrep/dtx_recovery_wait_lsn was not backported so the change
      on the test is not cherry-picked.
    • Add a new line feed and fix a bad file name · 0919667b
      Committed by J·Y
    • Fix ICW test if GPDB compiled without ORCA · 01cf8634
      Committed by Chris Hajas
      We need to ignore the output when enabling/disabling an Orca xform,
      because if the server is not compiled with Orca there will be a diff
      (and we don't really care about this output).
      Additionally, clean up unnecessary/excessive setting of GUCs.
      
      Some of these GUCs were on by default or only intended for a specific
      test. Explicitly setting them caused them to appear at the end of
      `explain verbose` plans, making the expected output more difficult to
      match when the server was built with or without Orca.
  9. 15 Jun 2020, 1 commit
    • Fix flaky test terminate_in_gang_creation · 6584b264
      Committed by Hubert Zhang
      The test case restarts all primaries and expects the old session to
      fail on its next query, since gangs are cached. But the restart may
      take more than 18s, which is the maximum idle time cached QEs are
      kept. In that case, the next query in the old session just fetches a
      new gang without the expected errors.
      Setting gp_vmem_idle_resource_timeout to 0 fixes this flaky test.
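
      Presumably along these lines (a sketch; the actual test change may set
      it differently):

      ```sql
      -- 0 disables the idle-resource timeout, so cached gangs are never
      -- reaped mid-test no matter how long the restart takes.
      SET gp_vmem_idle_resource_timeout = 0;
      ```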
      cherry-pick from: 63b5adf9
      Reviewed-by: Paul Guo <pguo@pivotal.io>
  10. 12 Jun 2020, 1 commit
  11. 11 Jun 2020, 3 commits
    • Fix bitmap segfault during portal cleanup · c9dc4729
      Committed by Denis Smirnov
      Currently, when we finish a query with a bitmap index scan and
      destroy its portal, we release bitmap resources in the wrong order.
      We should first release the bitmap iterator (a bitmap wrapper)
      and only after that close down the subplans (the bitmap index scan
      with its allocated bitmap). Before this commit, these operations were
      done in reverse order, which caused access to a freed bitmap in the
      iterator's closing function.
      
      Under the hood, pfree() is a wrapper around malloc's free(). free()
      doesn't return memory to the OS in most cases, and doesn't
      immediately corrupt data in a freed chunk, so it is often possible to
      access a freed chunk's data right after its deallocation. That is why
      we get a segfault only under concurrent workloads, when malloc's
      arena returns memory to the OS.
    • Change branch for test_gpdb_6X slack command · fa45da99
      Committed by Chris Hajas
      Previously, we used the same slack command for master and 6X. Now we
      have separate slack commands, and need to look in separate branches.
    • docs - graph analytics new page (#10138) · de367f7d
      Committed by Lena Hunter
      * clarifying pg_upgrade note
      
      * graph edits
      
      * graph analytics updates
      
      * menu edits and code spacing
      
      * graph further edits
      
      * insert links for modules
  12. 10 Jun 2020, 2 commits