1. 13 Jan, 2018: 6 commits
  2. 12 Jan, 2018: 9 commits
    • Schema-qualify internal queries used by ANALYZE. · b1d4fb4d
      Committed by Heikki Linnakangas
      To avoid being confused by a user-created function called "sum".

      Fixes github issue #4185.
    • a106aa15
    • Remove Debug_print_semaphore_detail GUC. · c7b19371
      Committed by Heikki Linnakangas
      It wasn't very useful. The semaphore code is inherited from upstream, and
      not likely to break any time soon. If you need this information during
      debugging, use a debugger.
    • Remove PXF regression tests from master pipeline (#4279) · d716f8c9
      Committed by Lav Jain
      * Remove PXF regression tests from master pipeline
      * Change regression_pxf test file name
      * Incorporate feedback
      * Update pxf_tarball location
    • Bump gpbackup version to alpha 3 · cf084291
      Committed by Chris Hajas
      Author: Chris Hajas <chajas@pivotal.io>
      Author: Karen Huddleston <khuddleston@pivotal.io>
    • Revert "gpstart: fix OOM issue" · 1494ab2d
      Committed by Mike Roth
      This reverts commit a0fcfc37.
    • Fix Filter required properties for correlated subqueries in ORCA · 59abec44
      Committed by Shreedhar Hardikar
      This commit brings in ORCA changes that ensure a Materialize node is not
      added under a Filter when its child contains outer references. Otherwise
      the subplan is not rescanned (because it is under a Material), producing
      wrong results. A rescan is necessary because it re-evaluates the subplan
      for each of the outer reference values.
      
      For example:
      
      ```
      SELECT * FROM A,B WHERE EXISTS (
        SELECT * FROM E WHERE E.j = A.j and B.i NOT IN (
          SELECT E.i FROM E WHERE E.i != 10));
      ```
      
      For the above query ORCA produces a plan with two nested subplans:
      
      ```
      Result
        Filter: (SubPlan 2)
        ->  Gather Motion 3:1
              ->  Nested Loop
                    Join Filter: true
                    ->  Broadcast Motion 3:3
                          ->  Table Scan on a
                    ->  Table Scan on b
        SubPlan 2
          ->  Result
                Filter: public.c.j = $0
                ->  Materialize
                      ->  Result
                            Filter: (SubPlan 1)
                            ->  Materialize
                                  ->  Gather Motion 3:1
                                        ->  Table Scan on c
                            SubPlan 1
                              ->  Materialize
                                    ->  Gather Motion 3:1
                                          ->  Table Scan on c
                                                Filter: i <> 10
      ```
      
      The Materialize node (on top of Filter with Subplan 1) has cdb_strict = true.
      The cdb_strict semantics dictate that when the Materialize is rescanned,
      instead of destroying its tuplestore, it resets the accessor pointer to the
      beginning and the subtree is NOT rescanned.
      So the entries from the first scan are returned for all future calls; i.e. the
      results depend on the first row output by the cross join. This causes wrong and
      non-deterministic results.
      
      Also, this commit reinstates this test in qp_correlated_query.sql and
      fixes another wrong result caused by the same issue. Note that the
      changes in rangefuncs_optimizer.out are because ORCA no longer falls back
      for those queries. Instead it produces a plan that is executed on the
      master (rather than on the segments, as the planner did), which changes
      the error messages.

      Also bump ORCA version to 2.53.8.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
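The wrong-result mechanism described above can be mimicked in a few lines of plain Python (this is an illustrative sketch, not ORCA or executor code; the tables and values are made up):

```python
# Outer rows and inner table for a correlated EXISTS check; values are invented.
A = [{"j": 1}, {"j": 3}]                   # outer rows
E = [{"j": 1, "i": 1}, {"j": 2, "i": 2}]   # inner table

def subplan(a_j):
    # Correct semantics: re-evaluate the subplan for each outer value.
    return [e for e in E if e["j"] == a_j]

class StrictMaterialize:
    # cdb_strict-style behavior: a rescan rewinds the tuplestore filled by the
    # first scan instead of re-running the subtree underneath.
    def __init__(self, plan):
        self.plan, self.cache = plan, None

    def scan(self, a_j):
        if self.cache is None:
            self.cache = self.plan(a_j)    # only the first outer value is used
        return self.cache

mat = StrictMaterialize(subplan)
correct = [bool(subplan(a["j"])) for a in A]  # EXISTS evaluated per outer row
cached = [bool(mat.scan(a["j"])) for a in A]  # depends on the first outer row
```

With these values, `correct` differs from `cached` on the second outer row: the replayed first scan makes the EXISTS look true for an outer value that actually matches nothing.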
    • gpstart: fix OOM issue · a0fcfc37
      Committed by Shoaib Lari
      gpstart did a cluster-wide check of heap_checksum settings and refused
      to start the cluster if this setting was inconsistent. This meant a
      round of ssh'ing across the cluster, which was causing OOM errors on
      large clusters.

      This commit moves the heap_checksum validation to gpsegstart.py and
      changes the logic so that only those segments whose heap_checksum
      setting matches the master's are started.

      Author: Nadeem Ghani <nghani@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
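The per-segment logic described above might look roughly like this (a minimal sketch; the names `partition_startable`, `dbid`, and the dict layout are illustrative, not the real gppylib API):

```python
def partition_startable(segments, master_checksum):
    """Split segments into (startable, skipped) by heap_checksum agreement
    with the master, instead of refusing to start the whole cluster."""
    startable = [s for s in segments if s["heap_checksum"] == master_checksum]
    skipped = [s for s in segments if s["heap_checksum"] != master_checksum]
    return startable, skipped

# Two hypothetical segments, one of which disagrees with the master.
segs = [{"dbid": 2, "heap_checksum": True},
        {"dbid": 3, "heap_checksum": False}]
start, skip = partition_startable(segs, master_checksum=True)
```

The key design point from the commit is that the comparison happens locally on each host (inside gpsegstart.py), so no extra cluster-wide ssh sweep is needed.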
    • docs - typo fix · 0310216b
      Committed by dyozie
  3. 11 Jan, 2018: 7 commits
  4. 10 Jan, 2018: 6 commits
  5. 09 Jan, 2018: 4 commits
    • gppylib: refactor SegmentPair to not support multiple mirrors · a19f7327
      Committed by Shoaib Lari
      Long ago, we thought we might need to support multiple mirrors. But we
      don't, and don't foresee it coming soon. Simplify the code to only ever
      have one mirror, but still allow for the possibility of no mirrors.

      Author: Shoaib Lari <slari@pivotal.io>
      Author: C.J. Jameson <cjameson@pivotal.io>
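The simplified shape described above amounts to replacing a list of mirrors with a single optional one. A sketch (class and attribute names are assumptions, not the actual gppylib code):

```python
class SegmentPair:
    """One primary and at most one mirror; None means unmirrored."""

    def __init__(self, primary, mirror=None):
        self.primary = primary
        self.mirror = mirror   # a single mirror, not a list of mirrors

    def has_mirror(self):
        return self.mirror is not None

unmirrored = SegmentPair(primary="seg0a")
mirrored = SegmentPair(primary="seg0a", mirror="seg0b")
```

Dropping the list removes a layer of iteration everywhere a pair is consumed, which is the simplification the commit is after.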
    • gppylib: Rename gpArray variables and classes · bbc47080
      Committed by Marbin Tan
      The gpArray use of the GpDB and Segment classes was confusing. This
      change renames GpDB to Segment and Segment to SegmentPair to clarify
      usage. It's a big diff, but a simple, repeating change.

      Author: Shoaib Lari <slari@pivotal.io>
      Author: Marbin Tan <mtan@pivotal.io>
      Author: C.J. Jameson <cjameson@pivotal.io>
    • Have a separate docker file for gpadmin user (#4236) · 53278041
      Committed by Lav Jain
      * Have a separate docker file for gpadmin user
      * Add indent package for centos6
    • isolation2: stop littering <stdout>.pid files on failure · cbab1ee6
      Committed by Jacob Champion
      The temporary-file hack to deal with pygresql output oddities didn't
      ensure that those files were always removed. Instead of a named file we
      don't need, just use a TemporaryFile that will go away as soon as we
      release it.

      Added a FIXME to revisit this once we upgrade pygresql.
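A minimal sketch of the approach, assuming the file only needs to hold intermediate output: `tempfile.TemporaryFile` creates an anonymous file (unlinked immediately on POSIX), so nothing is left on disk even if an error occurs before explicit cleanup. The `capture` helper here is illustrative, not the isolation2 code:

```python
import tempfile

def capture(write_fn):
    # No name on disk, so a failure between write and read litters nothing;
    # the OS reclaims the file as soon as the handle is released.
    with tempfile.TemporaryFile(mode="w+") as f:
        write_fn(f)
        f.seek(0)
        return f.read()

out = capture(lambda f: f.write("row 1\nrow 2\n"))
```

Contrast with `NamedTemporaryFile(delete=False)` or a hand-built path, which must be removed explicitly and therefore leaks when an exception skips the cleanup code.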
  6. 08 Jan, 2018: 1 commit
    • Update pg_proc.sql for changes made to generated file pg_proc_gp.h. · e942ccdc
      Committed by Heikki Linnakangas
      pg_proc_gp.h is generated from pg_proc.sql, but a few recent commits
      updated only pg_proc_gp.h, not pg_proc.sql. As a result, if you ran the
      perl script, the gp_replication_error() function vanished and the
      obsolete PT verification functions reappeared. Update pg_proc.sql, so
      that the perl script reproduces pg_proc_gp.h correctly again.
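The invariant behind this fix is generic: a checked-in generated file must equal what the generator emits from its source, or hand edits to the generated file disappear on the next regeneration. A hypothetical illustration (the `generate_header` function stands in for the perl script; the output format is invented):

```python
def generate_header(source_functions):
    # Stand-in generator: derive the "header" deterministically from source.
    return "".join(f"DATA(insert ... {fn});\n" for fn in sorted(source_functions))

source = {"gp_replication_error"}        # functions declared in the source file
checked_in = generate_header(source)     # what should be committed

# Hand-editing only the generated file breaks the invariant: the edit is
# not reflected in the source, so regeneration silently drops it.
hand_edited = checked_in + "DATA(insert ... manually_added_fn);\n"
in_sync = checked_in == generate_header(source)
drifted = hand_edited == generate_header(source)
```

A build-time check that regeneration matches the committed file would catch this class of mistake automatically.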
  7. 06 Jan, 2018: 5 commits
  8. 05 Jan, 2018: 2 commits
    • Minimize the race condition in BackoffSweeper() · ab74e1c6
      Committed by Pengzhou Tang
      There is a long-standing race condition in BackoffSweeper() which
      triggers an error and then a second assertion failure for not
      resetting sweeperInProgress to false.

      This commit doesn't resolve the race condition fundamentally with a
      lock or another mechanism, because the backoff machinery doesn't
      require precise control, so skipping some sweeps is acceptable for
      now. We also downgrade the log level to DEBUG, because a restart of
      the sweeper backend is unnecessary.
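The tolerant behavior described above can be sketched in plain Python (this is not the real backoff code; `Sweeper` and its fields are illustrative):

```python
import threading

class Sweeper:
    def __init__(self):
        self._gate = threading.Lock()   # stands in for sweeperInProgress
        self.sweeps = 0
        self.skipped = 0

    def sweep(self):
        # Non-blocking acquire: if another sweep holds the gate, skip quietly
        # instead of raising (which previously left the flag set and tripped
        # a second assertion).
        if not self._gate.acquire(blocking=False):
            self.skipped += 1
            return
        try:
            self.sweeps += 1            # the actual sweep work would go here
        finally:
            self._gate.release()        # always reset, even if the work fails

s = Sweeper()
s.sweep()
```

The `try/finally` mirrors the fix's intent: whatever happens during the sweep, the in-progress state is cleared, and a lost race costs only one skipped sweep.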
    • e8dc3ea4