  1. 27 January 2017, 3 commits
    • Make gpfaultinjector less noisy and update related icg tests · 554b878f
      Committed by Nikos Armenatzoglou
      Closes #1606
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
    • Locking on sequence relation is not required for sequence server · 957d93af
      Committed by Abhijit Subramanya
      The Postgres merge introduced init_sequence(), which can optionally lock the
      sequence relation.
      
      In GPDB, a single sequence server instance is used to generate sequence values
      for all requests coming from segments, hence it doesn't require a lock on the
      sequence relation.
      
      There are three concurrent scenarios when using sequences:
      Scenario A: concurrent requests from segments:
      create table t1 (c int, d serial) distributed by (c);
      insert into t1 select i from generate_series(1, 100) i;
      
      Scenario B: concurrent requests from master:
      tx1: select nextval('t1_c_seq'::regclass);
      tx2: select nextval('t1_c_seq'::regclass);
      
      Scenario C: concurrent requests from both master and segments
      tx1: select nextval('t1_c_seq'::regclass);
      tx2: insert into t1 values (200, default);
      
      Scenario A is protected by the single instance of the sequence server.
      Scenarios B and C are protected by the BUFFER_LOCK_EXCLUSIVE lock on the
      shared buffer of the sequence relation.
      
      With that said, we don't need to hold an additional lock on the sequence relation.
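      Below is a minimal sketch of the idea, not the actual GPDB init_sequence()
      code; the helper name and the lockmode parameter are illustrative. Upstream
      takes a relation-level lock before opening the sequence, while GPDB can open
      it with NoLock because the single sequence server plus the buffer lock
      already serialize access.
      
      ```
      #include "postgres.h"
      #include "access/heapam.h"      /* relation_open() */
      #include "storage/lmgr.h"       /* LockRelationOid() */
      #include "utils/rel.h"
      
      /* Hypothetical helper, for illustration only. */
      static Relation
      open_sequence_rel(Oid seq_relid, LOCKMODE lockmode)
      {
          /* Upstream behavior: take a relation-level lock first. */
          if (lockmode != NoLock)
              LockRelationOid(seq_relid, lockmode);
      
          /*
           * GPDB can pass NoLock here: the single sequence server serializes
           * segment requests, and BUFFER_LOCK_EXCLUSIVE on the sequence page
           * serializes concurrent nextval() calls on the master.
           */
          return relation_open(seq_relid, NoLock);
      }
      ```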
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
    • Use ForEach to iterate lists of GPDB lists inside the translators · 2620cd78
      Committed by Venkatesh Raghavan
      In PR #1585 Heikki suggested we replace OidListNth() in
      PdrgpmdidResolvePolymorphicTypes().
      
      Also verified that in all other places inside the PQO-related translators we
      always use ForEach to iterate over the list.
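      A small sketch of the pattern, assuming the translators' ForEach is a thin
      wrapper around PostgreSQL's foreach macro; the function below is
      illustrative, not translator code. Walking a List with foreach is linear
      overall, whereas calling list_nth()/OidListNth() inside a loop is quadratic
      on a linked list.
      
      ```
      #include "postgres.h"
      #include "nodes/pg_list.h"
      
      /* Illustrative only: iterate an OID list once, in order. */
      static void
      walk_oid_list(List *oid_list)
      {
          ListCell *lc;
      
          foreach(lc, oid_list)
          {
              Oid typid = lfirst_oid(lc);
      
              elog(DEBUG1, "resolved type oid: %u", typid);
          }
      }
      ```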
  2. 26 January 2017, 8 commits
    • Update persistent table catalog functions. · 9da45b62
      Committed by Jimmy Yih
      We recently updated some persistent table fields to remove
      previous_free_tid from gp_persistent_* tables and add tablespace oid
      to gp_relation_node. However, the persistent table catalog functions
      were not updated alongside the changes. This commit updates the
      functions to use the current schema.
      
      Commit references:
      https://github.com/greenplum-db/gpdb/commit/a56c032b7bb4926081828cb00d99909aa871e9c9
      https://github.com/greenplum-db/gpdb/commit/8fe321aff600d0b52d4d77fafc23d3292109d3ec
      
      Reported by Christopher Hajas.
    • Updating end-user docs with most recent 4.3x changes (#1635) · 58ba4873
      Committed by David Yozie
      * updating adminguide source with most recent 4.3.x work
      
      * updating reference manual with most recent 4.3.x work
      
      * updating utility guide with most recent 4.3.x changes
      
      * updating client tools guide with most recent 4.3.x changes
      
      * adding new file for client tools
      
      * updating map files with most recent 4.3.x changes
      
      * updating map files with most recent 4.3.x changes
      
      * Revert "updating map files with most recent 4.3.x changes"
      
      This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
      
      * Revert "updating map files with most recent 4.3.x changes"
      
      This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
      
      * updating ditamaps with latest 4.3.x changes
      
      * updating ditamaps with latest 4.3.x changes
    • Remove dead cost model visualization · 842105af
      Committed by Daniel Gustafsson
      While this would've been a neat thing had it been kept up to date,
      it's now over 9 years since it was last touched and it doesn't
      even load in recent versions of Sysquake anymore.
    • Removed limit expression from sort (closes #1625) · d5463511
      Committed by Karthikeyan Jambu Rajaraman
      Leveraged the bound for the limit with mk sort.
    • Async gang recreation should PQconnectPoll on bad fd poll. · 739734fb
      Committed by Jimmy Yih
      There are segment recovery scenarios where the fd's revents would be POLLNVAL
      while events is POLLOUT. This would cause an infinite loop until the default
      10 minute timeout is reached. Because of this, the FTS portion at the bottom
      of the createGang_async() function does not get executed correctly. This patch
      checks the fd's poll revents for POLLERR, POLLHUP, and POLLNVAL and calls
      PQconnectPoll so that the polling status PGRES_POLLING_WRITING can correctly
      transition to PGRES_POLLING_FAILED. The loop can then exit and the FTS logic
      at the end can run.
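      Here is a sketch of the idea, not the createGang_async() code itself; the
      helper name is made up. When poll() flags an error condition on the socket,
      the connection is still handed to PQconnectPoll() so libpq can move it from
      PGRES_POLLING_WRITING to PGRES_POLLING_FAILED instead of spinning until the
      timeout.
      
      ```
      #include <poll.h>
      #include "libpq-fe.h"
      
      /* Illustrative helper: advance one async connection based on poll() results. */
      static PostgresPollingStatusType
      advance_connection(PGconn *conn, const struct pollfd *pfd)
      {
          /* Error conditions: let libpq observe the failed socket. */
          if (pfd->revents & (POLLERR | POLLHUP | POLLNVAL))
              return PQconnectPoll(conn);
      
          /* Normal readiness: make progress on the handshake. */
          if (pfd->revents & (POLLIN | POLLOUT))
              return PQconnectPoll(conn);
      
          /* No events yet: keep waiting. */
          return PGRES_POLLING_WRITING;
      }
      ```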
    • Take upstream lazy_truncate_heap locking behavior · b7506da0
      Committed by Abhijit Subramanya
      During lazy_truncate_heap, an exclusive lock is taken on the heap to be
      truncated. This lock is required to prevent other concurrent transactions from
      reading an invalid rd_targblock, which is propagated to other backends as part
      of cache invalidation at commit time.
      
      This lock needs to be held until the end of commit, hence we remove the
      UnlockRelation() call that was added in GPDB to avoid a deadlock caused by
      concurrent vacuums.
      
      However, we cannot reproduce this deadlock on the latest GPDB; it was found in
      a very early version of GPDB (back in 3.3).
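      A sketch of the upstream behavior being adopted follows; the helper is
      illustrative, although upstream lazy_truncate_heap() does use
      ConditionalLockRelation(). The point is that the AccessExclusiveLock is taken
      and deliberately kept until commit, with no UnlockRelation() on the way out.
      
      ```
      #include "postgres.h"
      #include "storage/lmgr.h"
      #include "utils/rel.h"
      
      /* Illustrative only: lock acquisition for heap truncation. */
      static bool
      lock_heap_for_truncate(Relation onerel)
      {
          /*
           * Don't wait if someone else is using the table; vacuum can simply
           * skip the truncation in that case.
           */
          if (!ConditionalLockRelation(onerel, AccessExclusiveLock))
              return false;
      
          /*
           * Intentionally no UnlockRelation() after truncating: the lock is
           * held until commit, so no backend acts on a stale rd_targblock
           * before the relcache invalidation arrives.
           */
          return true;
      }
      ```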
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
    • Fix logfile_getname() · 00d040aa
      Committed by Abhijit Subramanya
      There are two issues with the current logfile_getname():
      - First, it doesn't honor the `gp_log_format` GUC. Originally, when the format
      is CSV, `.csv` is used, and when the format is TEXT, `.log` is used. However,
      currently `.csv` is always used regardless of the `gp_log_format` setting,
      which makes the content of the file and its suffix inconsistent.
      
      - Second, it mistakenly generates log files with the wrong suffix `.csv.csv`
      during logfile_rotate(), due to the wrong assumption that the filename always
      contains `.log` when the suffix is NULL. Also, due to the calling sequence of
      logfile_rotate(), an extra empty file is generated; in this case, the file
      with `.csv.csv` is always empty.
      
      The fix in this patch brings back the original GPDB behavior. After the fix,
      the correct extension is generated; however, an extra empty log file is still
      generated during log rotation.
      
      A separate refactoring is required to clean up the API changes in all the
      callers of logfile_getname(), since the `suffix` parameter is no longer
      needed, and to fix the logfile_rotate() call sequence that causes the extra
      empty log file.
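      A minimal sketch of the intended suffix selection follows; the helper is
      hypothetical, and the assumption that the gp_log_format GUC stores 0 for TEXT
      and 1 for CSV is ours.
      
      ```
      #include "postgres.h"
      
      /* Assumption: 0 = TEXT, 1 = CSV, mirroring the gp_log_format GUC. */
      extern int gp_log_format;
      
      /* Hypothetical helper: pick the suffix from the GUC instead of a constant. */
      static const char *
      logfile_suffix(void)
      {
          return (gp_log_format == 1) ? ".csv" : ".log";
      }
      ```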
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
  3. 25 January 2017, 7 commits
    • Fix error position in the errors reported from gp_dump_query_oids() · bc981799
      Committed by Heikki Linnakangas
      If an invalid query is passed to gp_dump_query_oids(), the error position
      would incorrectly point at the location in the original query containing the
      gp_dump_query_oids() call, rather than at the query passed as an argument to
      it. For example:
      
      regression=# select gp_dump_query_oids('select * from invalid');
      ERROR:  relation "invalid" does not exist
      LINE 1: select gp_dump_query_oids('select * from invalid');
                            ^
      
      To fix, set up error context information correctly before parsing the
      query.
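      A sketch of the general error-context mechanism follows; the callback and
      wrapper below are illustrative and not necessarily the exact fix. The idea is
      to push an ErrorContextCallback around the parse of the inner query string so
      any error is reported against that string rather than the outer query.
      
      ```
      #include "postgres.h"
      
      /* Illustrative callback: attach the inner query text to any error raised. */
      static void
      dump_query_error_callback(void *arg)
      {
          const char *query_string = (const char *) arg;
      
          errcontext("SQL statement \"%s\"", query_string);
      }
      
      /* Illustrative wrapper: parse with the callback pushed on the stack. */
      static void
      parse_inner_query(const char *query_string)
      {
          ErrorContextCallback errcallback;
      
          errcallback.callback = dump_query_error_callback;
          errcallback.arg = (void *) query_string;
          errcallback.previous = error_context_stack;
          error_context_stack = &errcallback;
      
          /* ... raw_parser(query_string) and query analysis would go here ... */
      
          error_context_stack = errcallback.previous;
      }
      ```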
    • Renaming a workload used in bugbuster tests · 8d5ae003
      Committed by Nikos Armenatzoglou
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
    • Remove StreamBitmap-related dead code · 77ea2a50
      Committed by Nikos Armenatzoglou
      If GPDB evaluates a query that needs to access an index, e.g., a btree, then
      at least one Bitmap Index Scan will appear in the query's plan. The
      MultiExecBitmapIndexScan function will be invoked to construct the bitmap. If
      a query contains a WHERE clause similar to d in (1, 2), where a bitmap index
      has been created on column d, then instead of creating a simple bitmap,
      MultiExecBitmapIndexScan will generate a composite bitmap that ORs all the
      bitmaps that satisfy the query condition. In our example,
      MultiExecBitmapIndexScan (in particular bmgetmulti) will OR the bitmaps of
      d = 1 and d = 2.
      
      To OR two bitmaps A and B, A should be a StreamBitmap (and not a HashBitmap).
      
      In this commit, we remove code that was executed when MultiExecBitmapIndexScan
      has to generate a composite bitmap of A and B, where A is a StreamBitmap and B
      is a HashBitmap generated when accessing a btree, gin, gist, or hash index.
      This does not seem to be a possible scenario because i) A can be a
      StreamBitmap only if a bitmap index has been constructed on a particular
      column, and ii) to the best of our knowledge we cannot have a single Bitmap
      Index Scan that accesses both a bitmap index and an index of another type,
      e.g., a btree.
      
      Authors: Nikos Armenatzoglou
               Shreedhar Hardikar <shardikar@pivotal.io>
               Haisheng Yuan <hyuan@pivotal.io>
    • Stop ignoring Lazy vacuum from RecentXmin calculation. · 7383c2b0
      Committed by Ashwin Agrawal
      As part of the 8.3 merge, via upstream commit
      92c2ecc1, code was introduced to ignore lazy vacuum when
      calculating RecentXmin and RecentGlobalXmin.
      
      In GPDB, lazy vacuum reindexes bitmap indexes, which generates tuples in
      pg_class with the lazy vacuum's transaction ID. Ignoring lazy vacuum for
      RecentXmin and RecentGlobalXmin during GetSnapshotData caused hint bits to be
      incorrectly set to `HEAP_XMAX_INVALID` for tuples intended to be deleted by
      lazy vacuum, breaking the HOT chain. This transaction visibility issue was
      encountered in CI many times, with the parallel schedule `bitmap_index, analyze`
      failing with the error `could not find pg_class tuple for index` at commit
      time of the lazy vacuum. Hence this commit stops tracking lazy vacuum in
      MyProc and performing any specific action related to it.
    • gpcheckcat query for gp_relation_node to include tablespace. · 3a6fc29e
      Committed by Ashwin Agrawal
      Commit 8fe321af added the tablespace OID to
      gp_relation_node to correctly reflect a unique relfilenode. As a result, the
      gpcheckcat query needs to include the tablespace OID when validating
      gp_relation_node's correctness against gp_persistent_relation_node.
    • 246da8ce
  4. 24 January 2017, 22 commits
    • Shorten the running time of gprecoverseg related test cases · 29f9fd79
      Committed by Pengzhou Tang
      Commit a2c3dd20 unexpectedly caused test_fts_transitions_02 to run for a
      longer time, so revert the related modification from it.
    • Move patterns used only by a particular test out of global init_file. · ae5bccd1
      Committed by Heikki Linnakangas
      This reduces the risk of accidentally masking out messages in a test that's
      not supposed to produce such messages in the first place, and is just
      nicer in general, IMHO.
      
      While we're at it, add a brief comment to init_file to explain what it's
      for. Also, remove a few more matchsubs from atmsort.pm that seem to be
      unused.
    • Add belts-and-suspenders check for victim tuple identification · 9b871049
      Committed by Daniel Gustafsson
      Since overflow_tuple() cannot handle a negative value for victim_lp_len,
      ensure that we have found a victim before continuing. Even though this
      should be rare, being extra careful in datapage rewriting seems like a
      good idea.
    • Fix heap page eviction logic · 44ca6ee3
      Committed by Daniel Gustafsson
      In trying to free up space via unused line pointers, the test for enough room
      wasn't recalculating the effect of any memmoves. Add a recalculation step
      before the test.
    • Exclude gpoptutils when checking loadable libraries · 1a11376e
      Committed by Daniel Gustafsson
      The gpoptutils module was removed during the 5.0 development cycle
      and won't be available in the new cluster. Add exception for upgrades
      from 4.3.
    • Rewrite table redistribution with dropped types in ALTER TABLE · 62d66c06
      Committed by Heikki Linnakangas
      When a table that has an attribute whose type has been dropped goes through
      the ALTER TABLE command queue, a "hidden" type will be created, and
      immediately dropped, during ALTER TABLE processing for table redistribution.
      This emits several NOTICEs which can be confusing to the user, as the name is
      autogenerated and the DROP TYPE may have happened at an earlier time. Below is
      an example of the output:
      
        create table <tablename> (a integer, b <typename>);
        drop type <typename>;
        ...
        alter table <tablename> set with(reorganize = true) distributed randomly;
        NOTICE:  return type pg_atsdb_<oid>_2_3 is only a shell
        NOTICE:  argument type pg_atsdb_<oid>_2_3 is only a shell
        NOTICE:  drop cascades to function pg_atsdb_<oid>_2_3_out(pg_atsdb_<oid>_2_3)
        NOTICE:  drop cascades to function pg_atsdb_<oid>_2_3_in(cstring)
      
      The reason for adding the hidden types is that the redistribution is
      performed with a CTAS doing SELECT *. To fix, change the way the CTAS is
      done, to not create hidden types.
      
      The temp table that we create still needs to include dropped columns at the
      same positions as the old one. Otherwise, when we swap the relation files,
      a tuple's representation on-disk won't match the catalogs. However, we
      cannot easily re-construct a dropped column with the same attlen, attalign,
      etc. as the original dropped column. Instead, create it as if it was an
      INT4 column, and just before swapping the relation files, update the
      attlen, attalign fields in pg_attribute entries of the dropped columns to
      match that of INT4. That way, the original table's catalog entries match
      that of the temp table.
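      A sketch of the placeholder idea follows; the helper is illustrative, and the
      actual commit updates the pg_attribute catalog rows rather than an in-memory
      descriptor, but the fields involved are the same.
      
      ```
      #include "postgres.h"
      #include "catalog/pg_attribute.h"
      
      /* Illustrative only: give a dropped column the physical layout of int4. */
      static void
      mark_dropped_column_as_int4(Form_pg_attribute att)
      {
          att->atttypid = InvalidOid;   /* dropped columns keep no type */
          att->attisdropped = true;
          att->attlen = sizeof(int32);  /* int4 length */
          att->attbyval = true;         /* int4 is pass-by-value */
          att->attalign = 'i';          /* int4 alignment */
      }
      ```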
      
      Alternatively, we could build the temp table without the dropped columns,
      and remove them from pg_attribute altogether. However, we'd need to update
      the attnum field of all following columns, and cascade that change to at
      least pg_attrdef and pg_depend. That seems more complicated.
      
      Also remove output from expected testfiles and perform minor cleanups.
      
      Original patch by Daniel Gustafsson, with the int4-placeholder mechanism
      added by me.
    • Remove gppkg packaging of PL/perl. · 04017475
      Committed by Heikki Linnakangas
      In previous versions of GPDB, we compiled PL/perl against the Perl version
      that shipped with RHEL5, and built a separate gppkg package of plperl,
      compiled against the version that ships with RHEL6. If you wanted to use
      PL/perl on RHEL6, you had to install the package separately. That's now
      obsolete, as we don't support RHEL5 anymore. We can just always build
      against the later Perl version.
      
      Adjust concourse script to cope when no gppkg packages were built, as is
      now the case on most platforms.
    • Remove perforce leftovers · 0630a33e
      Committed by Daniel Gustafsson
      Perforce was used at Pivotal before Greenplum was open sourced, but all uses
      of it have since been retired. Remove the Perforce leftovers that are dead
      code now.
    • Add TINC walrep test to Concourse pipeline. · 5e112889
      Committed by Jimmy Yih
    • Fix walrep TINC tests · 1e199700
      Committed by Jimmy Yih
      Some walrep TINC tests have been broken for a while now due to all the
      changes going into 5.0_MASTER. This commit brings the tests back to a
      green state so that we can add these tests back to our validation
      pipeline. Most changes are simple like fixing pg_ctl calls to use long
      options, updating xlogdump parsing, or just updating ans files.
    • Rename table in bitmapscan ICG test · 5b9635c9
      Committed by Nikos Armenatzoglou
    • Fix gptransfer test using incorrect batch size. · ac365809
      Committed by Christopher Hajas
      * The --batch-size=10 flag was mistakenly added to a scenario that
        requires ordering and thus a batch size of 1.
    • Ensure that a StreamNode is always owned by only one StreamBitmap · 9dfd0535
      Committed by Nikos Armenatzoglou
      In this commit, we change the ownership of a StreamNode, so that when a
      StreamNode A is added as an input to a StreamNode B, the bitmap that was
      pointing to A abandons the ownership and the pointer is set to NULL. With this
      change, when GPDB frees the BitmapOR, it will not attempt to free the OR's
      StreamNode.
      
      In addition, if a StreamNode is an OpStream, then opstream_free function is
      used to free the memory used by the StreamNode, which actually uses pfree. On
      the other hand, if a StreamNode is an IndexStream, then stream_free function is
      invoked, which does not pfree the StreamNode. For consistency, we add function
      indexstream_free to always pfree a StreamNode of IndexStream type.
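      A generic sketch of the ownership-transfer rule follows; the struct and
      function are illustrative and not the StreamBitmap API. Once node A becomes
      an input of node B, the previous owner's pointer is cleared so only B ever
      frees A.
      
      ```
      #include <stddef.h>
      
      /* Illustrative node type: 'input' is owned and freed with its owner. */
      typedef struct StreamLikeNode
      {
          struct StreamLikeNode *input;
      } StreamLikeNode;
      
      /* Transfer ownership of *owner_slot to 'b' and clear the old owner's pointer. */
      static void
      attach_input(StreamLikeNode *b, StreamLikeNode **owner_slot)
      {
          b->input = *owner_slot;   /* B now owns the node */
          *owner_slot = NULL;       /* previous owner abandons it */
      }
      ```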
      Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
    • Move backup/restore privileges tinc tests to behave · a84f6c3d
      Committed by Marbin Tan
    • Move openbar tinc test into behave · c9770456
      Committed by Larry Hamel
      -- add behave test for gp_default_storage_options
      -- add PgHba class to represent pg_hba.conf for textual manipulation
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
    • Update foreign key json file name to 5.0 (#1605) · ca66868e
      Committed by Chris Hajas
      * Update foreign key json file name to 5.0
      
      * This was breaking gpcheckcat, which assumes the json file corresponds
        to the current greenplum version.
      * Update docs for generating the json file.
      
      Authors: Chris Hajas and Jamie McAtamney
    • Revert "Fix logfile_getname()" · 8e17b053
      Committed by Ashwin Agrawal
      CI is red with a unit test failure:
      
      ```
      [ RUN         ] test__open_alert_log_file__NonGucOpen
      [          OK ] test__open_alert_log_file__NonGucOpen
      [ RUN         ] test__open_alert_log_file__NonMaster
      [          OK ] test__open_alert_log_file__NonMaster
      [ RUN         ] test__open_alert_log_file__OpenAlertLog
                      "gpperfmon/logs/alert_log.1.log" != "gpperfmon/logs/alert_log.12345"
                      ERROR: Check of parameter filename, function fopen failed
      Expected parameter declared at syslogger_test.c:69
                      ERROR: syslogger_test.c:19 Failure!
      [      FAILED ] test__open_alert_log_file__OpenAlertLog
      [=============] 3 tests ran
      [ PASSED      ] 2 tests
      [ FAILED      ] 1 tests, listed below
      [ FAILED      ] test__open_alert_log_file__OpenAlertLog
      ```
      
      This reverts commit ffb2cf08.
    • Fix logfile_getname() · ffb2cf08
      Committed by Ashwin Agrawal
      There are two issues with the current logfile_getname():
      - First, it doesn't honor the `gp_log_format` GUC. Originally, when the format
      is CSV, `.csv` is used, and when the format is TEXT, `.log` is used. However,
      currently `.csv` is always used regardless of the `gp_log_format` setting,
      which makes the content of the file and its suffix inconsistent.
      
      - Second, it mistakenly generates log files with the wrong suffix `.csv.csv`
      during logfile_rotate(), due to the wrong assumption that the filename always
      contains `.log` when the suffix is NULL. Also, due to the calling sequence of
      logfile_rotate(), an extra empty file is generated; in this case, the file
      with `.csv.csv` is always empty.
      
      The fix in this patch brings back the original GPDB behavior. After the fix,
      the correct extension is generated; however, an extra empty log file is still
      generated during log rotation.
      
      A separate refactoring is required to clean up the API changes in all the
      callers of logfile_getname(), since the `suffix` parameter is no longer
      needed, and to fix the logfile_rotate() call sequence that causes the extra
      empty log file.
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
    • [#134494357] Added ANYENUM, ANYNONARRAY · c3ad85eb
      Committed by Dhanashree Kashid
      After the 8.3 merge, GPDB has new polymorphic types, ANYENUM and ANYNONARRAY.
      
      This fix adds support for ANYENUM and ANYNONARRAY in the translator.
      
      As in PostgreSQL, when a function has polymorphic arguments and results, they
      must resolve to the same actual type in the function call. For example, a
      function declared as `f(ANYARRAY) returns ANYENUM` will only accept arrays of
      enum types.
      
      We already have this resolution logic implemented in
      `resolve_polymorphic_argtypes()`.
      
      Refactor the code in `PdrgpmdidResolvePolymorphicTypes()` to use
      `resolve_polymorphic_argtypes()` to deduce the correct data types for the
      function arguments and return type, based on the function call.
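      A sketch of the reuse follows; the wrapper is illustrative, while
      resolve_polymorphic_argtypes() is the existing backend routine in funcapi.c.
      Given the declared argument types and the actual call expression, the backend
      substitutes concrete types for ANYELEMENT, ANYARRAY, ANYENUM and ANYNONARRAY.
      
      ```
      #include "postgres.h"
      #include "funcapi.h"
      
      /* Illustrative wrapper: resolve polymorphic argument types in place. */
      static bool
      resolve_call_argtypes(Oid *argtypes, int numargs, Node *call_expr)
      {
          /* NULL argmodes means all arguments are plain IN arguments. */
          return resolve_polymorphic_argtypes(numargs, argtypes, NULL, call_expr);
      }
      ```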
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
      Signed-off-by: Omer Arap <oarap@pivotal.io>
    • Set codegen off by default. · 8b42ddd1
      Committed by Shreedhar Hardikar
    • Add cs-storage test-suite to CI. · df075620
      Committed by Abhijit Subramanya
    • Add upload to Coverity step and set twice weekly trigger to pipeline [#135626599] · 425e00ee
      Committed by Tom Meyer
      We fixed all the shell checks for the scan_with_coverity script and now
      the cov-build is run in gpAux. We added the following step to tar it and upload
      the tarball to scan.coverity.com.
      
      In the pipeline file we added a twice weekly trigger on Monday and
      Thursday morning.
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>