1. 15 Aug 2018, 13 commits
  2. 14 Aug 2018, 16 commits
    • update minAllowedPort of gpaddmirrors · fb2a077e
      Committed by yanchaozhong
      Because the default offset is 1000 and the default minimum primary
      port is 5432 (primary port + offset = mirror database port), the
      default minAllowedPort should be 6432:
      
      minPort = min([seg.getSegmentPort() for seg in gpArray.getDbList()])
    • Add bounds check for the '-p' option of the gpaddmirrors command · 313abac2
      Committed by yanchaozhong
      The -p option of the gpaddmirrors command has no bounds check. If the resulting mirror port is out of range, the cluster system table is still updated successfully, but the mirror cannot be started. A C sketch of the check follows the log below.
      
      $ gpaddmirrors -p 200000
      ...
      gpaddmirrors:node:gp6-[INFO]:-   Primary instance port        = 40300
      ...
      gpaddmirrors:node:gp6-[INFO]:-   Mirror instance port         = 240300
      ...
      gpaddmirrors:node:gp6-[INFO]:-Successfully updated gp_segment_configuration with mirror info
      ...
      gpaddmirrors:node:gp6-[INFO]:-Process results...
      gpaddmirrors:node:gp6-[WARNING]:-Failed to start segment.  The fault prober will shortly mark it as down. Segment: node:/data/mirror/gpseg0:content=0:dbid=3:mode=r:status=d: REASON: PG_CTL failed.
      
      pg_log:
      ,,,,"FATAL","22023","240300 is outside the valid range for parameter ""port"" (1 .. 65535),,,,
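
      A minimal C sketch of the rule being enforced (illustrative only;
      the actual fix lives in the Python utility): -p is an offset added
      to each primary port, and 1..65535 is PostgreSQL's valid port range,
      as the FATAL above shows.

      #include <stdbool.h>

      #define MAX_PORT 65535   /* upper bound of PostgreSQL's "port" setting */

      /* Return false if any primary port plus the -p offset would fall
       * outside the range the server accepts at startup. */
      static bool
      mirror_ports_in_range(const int *primary_ports, int nports, int offset)
      {
          for (int i = 0; i < nports; i++)
          {
              int mirror_port = primary_ports[i] + offset;

              if (mirror_port < 1 || mirror_port > MAX_PORT)
                  return false;   /* e.g. 40300 + 200000 = 240300: rejected */
          }
          return true;
      }
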
    • Move TMGXACT out of PGPROC into a separate array. · 18b8eec1
      Committed by Richard Guo
      Based on the same consideration as in commit ed0b40, this is
      supposed to speed up CreateDistributedSnapshot by reducing the
      number of cache lines that need to be fetched; a sketch of the
      layout change follows.
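
      A minimal sketch of the data-layout idea, with heavily simplified
      stand-in types (the real PGPROC and TMGXACT carry many more fields):

      #include <stdint.h>

      #define MAX_BACKENDS 128              /* illustrative size */

      typedef struct TMGXACT
      {
          uint32_t gxid;                    /* simplified stand-in field */
      } TMGXACT;

      typedef struct PGPROC
      {
          char other_state[192];            /* spans several cache lines */
          /* TMGXACT tmgxact;   <-- previously embedded here, now removed */
      } PGPROC;

      /* Dense array indexed like the proc array: a snapshot scan now reads
       * contiguous memory instead of touching one padded PGPROC per entry. */
      static TMGXACT allTmGxact[MAX_BACKENDS];

      static uint32_t
      scan_all_gxids(int nprocs)
      {
          uint32_t acc = 0;

          for (int i = 0; i < nprocs; i++)
              acc += allTmGxact[i].gxid;    /* cache-friendly sequential read */
          return acc;
      }
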
    • Fix 'stack_base_ptr' issue with '--enable-testutils'. · 0aa9289a
      Committed by Richard Guo
      This fixes a 'stack_base_ptr' assertion failure with
      '--enable-testutils' and also revises the related code to match
      upstream.
    • Fix INSERT RETURNING on partitioned table. · 62401aaa
      Committed by Heikki Linnakangas
      The ResultRelInfos we build for the partitions, in slot_get_partition(),
      don't contain the ProjectionInfo needed to execute RETURNING. We need to
      look that up in the parent ResultRelInfo and, when executing it, be
      careful to use the "parent" version of the tuple, the one from before
      the columns were mapped for the target partition; a sketch follows.
      
      Fixes github issue #4735.
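
      A hedged sketch of the shape of the fix, reusing upstream executor
      names (ri_projectReturning is the real ResultRelInfo field); the
      helper name is hypothetical and the actual GPDB patch may differ:

      #include "postgres.h"
      #include "nodes/execnodes.h"      /* ResultRelInfo, ProjectionInfo */

      /* Pick the RETURNING projection, falling back to the parent's when
       * the partition's ResultRelInfo was built without one. The caller
       * must then evaluate it against the parent-format tuple, i.e. the
       * slot captured before the columns were remapped for the target
       * partition. */
      static ProjectionInfo *
      choose_returning_projection(ResultRelInfo *partRelInfo,
                                  ResultRelInfo *parentRelInfo)
      {
          if (partRelInfo->ri_projectReturning != NULL)
              return partRelInfo->ri_projectReturning;

          return parentRelInfo->ri_projectReturning;
      }
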
    • Fix memory bug within cdbdisp_get_PQerror · f72429a7
      Committed by Pengzhou Tang
      cdbdisp_get_PQerror creates a new error data object and initializes
      it with the filename and function values received from the QE.
      ErrorData expects a constant filename and function and does not copy
      them into ErrorContext. The problem was that filename and function
      pointed to unstable memory, so when the edata was used later it could
      raise a SIGSEGV. To resolve this, copy them into the transaction
      context, because this error data can only be used inside the current
      transaction, as sketched below.
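
      A minimal sketch of the fix using the PostgreSQL memory-context API;
      the helper and qe_* parameter names are illustrative, not the actual
      patch:

      #include "postgres.h"
      #include "utils/elog.h"           /* ErrorData */
      #include "utils/memutils.h"       /* TopTransactionContext */

      /* Duplicate the location strings received from the QE into
       * per-transaction memory, so the ErrorData keeps pointing at valid
       * storage for as long as it can legitimately be used. */
      static void
      copy_qe_error_location(ErrorData *edata, const char *qe_filename,
                             const char *qe_funcname)
      {
          MemoryContext oldcontext = MemoryContextSwitchTo(TopTransactionContext);

          edata->filename = pstrdup(qe_filename);
          edata->funcname = pstrdup(qe_funcname);

          MemoryContextSwitchTo(oldcontext);
      }
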
    • eagerfree in executor: support index only scan. (#5462) · f445f830
      Committed by Paul Guo
      Index-only scan is a new feature from the PG 9.2 merge; the
      eagerfree-related functions did not yet support it (see the sketch
      below).
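
      A hedged sketch of what the missing support looks like: eager-free
      dispatches on the node tag, and T_IndexOnlyScanState (a real node
      tag from the 9.2 merge) had no case. The helper name is an
      assumption, not the actual patch:

      #include "postgres.h"
      #include "nodes/execnodes.h"

      extern void ExecEagerFreeIndexOnlyScan(IndexOnlyScanState *node); /* assumed */

      static void
      eager_free_node(PlanState *node)
      {
          switch (nodeTag(node))
          {
              /* ... existing cases for the other scan types ... */
              case T_IndexOnlyScanState:
                  ExecEagerFreeIndexOnlyScan((IndexOnlyScanState *) node);
                  break;
              default:
                  break;
          }
      }
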
    • Refine dispatching of COPY command · a1b6b2ae
      Committed by Pengzhou Tang
      Previously, COPY used CdbDispatchUtilityStatement directly to
      dispatch 'COPY' statements to all QEs and then sent/received
      data through the primaryWriterGang. This happened to work only
      because the primaryWriterGang is not recycled when a dispatcher
      state is destroyed, which is nasty: logically, the COPY command
      has already finished at that point.

      This commit splits the COPY dispatching logic into two parts to
      make it more reasonable.
    • Add two helper functions to construct query parms · b8fb0957
      Committed by Pengzhou Tang
      * cdbdisp_buildUtilityQueryParms
      * cdbdisp_buildCommandQueryParms
    • Remove cdbdisp_finishCommand · 957629d1
      Committed by Pengzhou Tang
      Previously, cdbdisp_finishCommand did three things:
      1. cdbdisp_checkDispatchResult
      2. cdbdisp_getDispatchResult
      3. cdbdisp_destroyDispatcherState
      
      However, cdbdisp_finishCommand didn't make the code cleaner or more
      convenient to use; on the contrary, it made error handling more
      difficult and the code more complicated and inconsistent, so the
      three steps are now invoked directly at the call sites (see the
      sketch below).

      This commit also resets estate->dispatcherState to NULL to avoid
      re-entry of cdbdisp_* functions.
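
      A hedged sketch of the resulting caller-side pattern; the exact
      signatures are assumptions based on the function names above:

      /* The three steps are now spelled out at the call site, so error
       * handling can happen between them instead of inside one opaque
       * helper. */
      cdbdisp_checkDispatchResult(ds, DISPATCH_WAIT_NONE);
      cdbdisp_getDispatchResult(ds);       /* collect results/errors from QEs */
      cdbdisp_destroyDispatcherState(ds);

      estate->dispatcherState = NULL;      /* avoid re-entry of cdbdisp_* */
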
    • Rename CdbCheckDispatchResult to follow the naming convention · 60bd3ab2
      Committed by Pengzhou Tang
      Use cdbdisp_checkDispatchResult instead of CdbCheckDispatchResult to
      be consistent with the other cdbdisp_* functions.
    • Do not call CdbDispatchPlan() to dispatch nothing · 4f38b425
      Committed by Pengzhou Tang
      Previously, CdbDispatchPlan might be called to dispatch nothing:
      1. The init plan is parallel but the main plan is not;
         CdbDispatchPlan is still called for the main plan.
      2. The init plan is not parallel; CdbDispatchPlan is still called
         for the init plan.

      The reason is that DISPATCH_PARALLEL stands for the whole plan,
      including the main plan and the init plans. This commit adds a way
      to tell separately which part is actually parallel, avoiding the
      unnecessary dispatching.
    • Upgrade Docker OpenJDK to 1.8 (#5457) · 318c0c74
      Committed by Lav Jain
    • Add madlib jobs to generated master pipeline. (#5469) · e1db3846
      Committed by kaknikhil
      PR https://github.com/greenplum-db/gpdb/pull/5432 was merged but the
      master pipeline wasn't recreated from the template. This PR generates
      the master pipeline from the template so that the madlib jobs can be run
      on the master pipeline.
    • Add madlib_build_gppkg job to master pipeline (#5432) · 0a974e5f
      Committed by Jingyi Mei
      * Add madlib_build_gppkg job to master pipeline
      
      The current gpdb master pipeline fetches a madlib gppkg compiled
      against an earlier version of the gpdb code, installs the gppkg, and
      runs dev check in a container with the latest gpdb installed. If
      there is a catalog change in gpdb, the test will fail.

      To solve this issue, we add a job, build_madlib_gppkg, to compile
      the madlib gppkg from source and pass it to the downstream dev check
      jobs, so that madlib is always compiled and tested against the
      latest catalog changes.
      Co-authored-by: Domino Valdano <dvaldano@pivotal.io>
    • Move distributed_transactions regression test in greenplum_schedule · 255c3c1e
      Committed by Jimmy Yih
      The distributed_transactions test contains a serializable
      transaction. This serializable transaction may intermittently cause
      the appendonly test to fail when run in the same test group. The
      appendonly test runs VACUUM on some appendonly tables and checks that
      last_sequence is nonzero in gp_fastsequence. Serializable transactions
      make concurrent VACUUM operations on appendonly tables exit early.
      
      To fix the contention, let's move the distributed_transactions test to
      another test group.
      
      appendonly test failure diff:
      *** 632,640 ****
         NormalXid |      0 | t        |             0
         NormalXid |      0 | t        |             1
         NormalXid |      0 | t        |             2
      !  NormalXid |      1 | t        |             0
      !  NormalXid |      1 | t        |             1
      !  NormalXid |      1 | t        |             2
        (6 rows)
      
      --- 630,638 ----
         NormalXid |      0 | t        |             0
         NormalXid |      0 | t        |             1
         NormalXid |      0 | t        |             2
      !  NormalXid |      1 | f        |             0
      !  NormalXid |      1 | f        |             1
      !  NormalXid |      1 | f        |             2
        (6 rows)
      
      Repro:
      1: CREATE TABLE heap_table (a int, b int);
      1: INSERT INTO heap_table SELECT i, i FROM generate_series(1,100)i;
      1: CREATE TABLE ao_table WITH (appendonly=true) AS SELECT * FROM heap_table;
      1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
      2: BEGIN ISOLATION LEVEL SERIALIZABLE;
      2: SELECT 1;
      1: VACUUM ao_table; -- VACUUM exits early
      1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
      2: END;
      1: VACUUM ao_table; -- VACUUM completes
      1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
  3. 13 Aug 2018, 9 commits
  4. 11 Aug 2018, 2 commits