1. 03 Jul 2017 (2 commits)
  2. 01 Jul 2017 (7 commits)
    • behave: Fix gpperfmon behave test intermittent failure for log alert history · 63ad5094
      Committed by Marbin Tan
      The gpperfmon_log_alert_history scenario occasionally fails because
      there is no data in the log alert history table.
      This may happen because we copy an empty csv file: gpperfmon writes
      log alerts to a file on a cadence, so we may be copying a file that
      has not been written to yet.
      
      Make sure that we have something to copy before proceeding to the
      next step.
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      63ad5094
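The fix above amounts to waiting until the log alert csv has content before copying it. A minimal Python sketch of that guard (the helper name and parameters are illustrative, not the actual behave step):

```python
import os
import time

def wait_for_nonempty_file(path, timeout=30, interval=1):
    """Poll until the file exists and has content, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path) and os.path.getsize(path) > 0:
            return True
        time.sleep(interval)
    return False
```

The test step would call this before copying, and fail with a clear message if the file never becomes non-empty.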
    • gpperfmon: refactor to make 'skew' calculation more readable · 6d01138b
      Committed by Marbin Tan
      Coefficient of Variation Calculation
      
      Coefficient of variation is the standard deviation divided by the mean.
      
      We're using the term skew very loosely in our description, as we're
      actually calculating coefficient of variation.
      
      With the coefficient of variation, we can tell how dispersed the data
      points are across the segments. The higher the coefficient of
      variation, the more non-uniform the distribution of data in the cluster.
      
      Coefficient of variation is unitless so it could be used for comparing
      different clusters and how they are performing relative to each other.
      
      CPU skew calculation:
      mean(cpu) = sum(per-segment cpu cycles) / count(segments)
      variance(cpu) = sum((cpu(segment) - mean(cpu))^2) / count(segments)
      std_dev(cpu) = sqrt(variance(cpu))
      skew(cpu) = coefficient of variation = std_dev(cpu) / mean(cpu)
      
      Rows out skew calculation:
      mean(row) = sum(per-segment rows) / count(segments)
      variance(row) = sum((row(segment) - mean(row))^2) / count(segments)
      std_dev(row) = sqrt(variance(row))
      skew(row) = coefficient of variation = std_dev(row) / mean(row)
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
      6d01138b
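The formula above can be written out directly. A minimal Python sketch of the coefficient-of-variation calculation (illustrative, not the gpperfmon C code itself):

```python
import math

def coefficient_of_variation(per_segment_values):
    """Standard deviation divided by the mean, across per-segment metrics.

    A perfectly uniform distribution gives 0; higher values mean
    more skew across segments.
    """
    n = len(per_segment_values)
    mean = sum(per_segment_values) / n
    variance = sum((v - mean) ** 2 for v in per_segment_values) / n
    return math.sqrt(variance) / mean
```

For example, four segments doing identical work give 0, while an uneven split like [1, 3] gives 0.5.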
    • gpperfmon: consolidate cpu skew and row skew behave test · e49e0fb7
      Committed by Tushar Dadlani
      Any kind of intensive work should show up in gpperfmon as both
      cpu skew and row skew, so the two are combined into one test and
      exercised at the same time.
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      e49e0fb7
    • gpperfmon: Fix incorrect data saved into queries_history table · 12cca294
      Committed by Marbin Tan
      sigar_proc_cpu_get is the function which takes a pid and fetches
      the cpu_elapsed value for that pid at a given moment.
      
      get_pid_metrics saves the information into a hash table within the
      gpsmon process. That pid only gets wiped out of the hashtable when
      gpmmon requests a dump command to gpsmon.
      
      gpsmon process sends tcp packets with information about cpu_elapsed
      (amount of cpu cycles spent on a given
      slice/process). Doing a 'dump' clears out the gpsmon hashtables.
      
      This could lead to scenarios where a query has ended and the
      process has died, so sigar_proc_cpu_get can't find the pid and
      puts 0 into the struct that updates the hashtable. The hashtable
      then holds 0 for cpu_elapsed, which becomes the last entry in
      queries_tail, so queries_history ends up with 0 for cpu_elapsed.
      
      Fix: Ensure that we validate the functions that request pid metrics
      from libsigar and log the issues if they occur.
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
      12cca294
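The fix validates the metrics call and logs failures instead of overwriting the hashtable with zeroes. A hedged Python sketch of that guard (the function names and the exception stand in for sigar_proc_cpu_get's error return; the actual gpsmon code is C):

```python
import logging

def update_pid_metrics(hashtable, pid, get_proc_cpu):
    """Only update the hashtable when the metrics call succeeds.

    get_proc_cpu stands in for sigar_proc_cpu_get: it returns a
    cpu_elapsed value, or raises if the pid no longer exists.
    """
    try:
        cpu_elapsed = get_proc_cpu(pid)
    except ProcessLookupError:
        # The process may have exited; keep the last known value
        # instead of overwriting it with 0.
        logging.warning("could not read cpu metrics for pid %d", pid)
        return hashtable
    hashtable[pid] = cpu_elapsed
    return hashtable
```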
    • Minor syntax change to statistics restore · ab06db22
      Committed by Jamie McAtamney
      ab06db22
    • Fix error with statistics restore when statistics already exist · e9a86809
      Committed by Karen Huddleston
      Previously, if pg_statistic already contains statistics for
      a given table column, attempting to insert those statistics
      again during the restore would give a primary key error and
      statistics would not be correctly restored.
      
      Now, existing statistics are deleted just before inserting the
      restore statistics, so there is no collision.
      Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
      e9a86809
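The delete-before-insert ordering can be sketched as follows; this is an illustrative Python helper, not the actual restore tooling, and the insert statement is assumed to be supplied by the caller:

```python
def restore_statistics_statements(relid, attnum, insert_stmt):
    """Emit a DELETE before the INSERT so restoring over existing
    pg_statistic rows cannot hit a duplicate-key error."""
    delete_stmt = (
        "DELETE FROM pg_statistic "
        "WHERE starelid = %d AND staattnum = %d;" % (relid, attnum)
    )
    return [delete_stmt, insert_stmt]
```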
    • DOCS: Editing text that pins NetBackup to specific version · 501dd328
      Committed by dyozie
      501dd328
  3. 30 Jun 2017 (9 commits)
  4. 29 Jun 2017 (22 commits)
    • a8ac8c99
    • Add test script for running pg_upgrade in ICW · a100814c
      Committed by Daniel Gustafsson
      This adds a testrunner to pg_upgrade intended to run at the end of
      ICW. The running gpdemo cluster is converted from the current GPDB
      version to the same version, which should result in an identical
      cluster. The script first dumps the ICW cluster, then upgrades into
      a new gpdemo cluster and diffs the dump from that with the original
      dump. In case the cluster needs to be tweaked before the test, a
      _pre.sql file can be supplied which will be executed against the
      old cluster before dumping the schema of it. This file currently
      drops the relations which hold constraints not yet supported by
      pg_upgrade.
      
      An optional quicktest that the Oid synchronization is maintained
      for new objects is supported in a smoketest mode.
      
      The new cluster is brought up with fsync turned off to speed up
      the test.
      
      This is inspired by the upstream test runner for pg_upgrade.
      a100814c
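The diff step of a runner like this can be sketched with difflib; this is an illustrative stand-in, not the actual test script:

```python
import difflib

def diff_dumps(old_dump, new_dump):
    """Compare schema dumps taken before and after the upgrade; an
    identical cluster should produce an empty diff."""
    return list(difflib.unified_diff(
        old_dump.splitlines(), new_dump.splitlines(),
        fromfile="old_cluster.sql", tofile="new_cluster.sql",
        lineterm=""))
```

An empty result means the upgraded cluster round-tripped cleanly; any diff lines indicate objects that pg_upgrade did not preserve.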
    • Change the way OIDs are preserved during pg_upgrade. · f51f2f57
      Committed by Heikki Linnakangas
      Instead of meticulously recording the OIDs of each object in the pg_dump
      output, dump and load all OIDs as separate steps in pg_upgrade.
      
      We now only preserve OIDs of types, relations and schemas from the old
      cluster. Other objects are assigned new OIDs as part of the restore.
      To ensure the OIDs are consistent between the QD and QEs, we dump the
      (new) OIDs of all objects to a file, after upgrading the QD node, and use
      those OIDs when restoring the QE nodes. We were already using a similar
      mechanism for new array types, but we now do that for all objects.
      f51f2f57
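The dump-and-reuse mechanism can be sketched as a simple object-to-Oid mapping written to a file after the QD upgrade and read back for each QE restore; the JSON format and function names are illustrative, not the actual pg_upgrade implementation:

```python
import json

def dump_oid_map(oids_by_object, path):
    """After upgrading the QD, record each object's newly assigned
    Oid so the QE restores can assign identical Oids."""
    with open(path, "w") as f:
        json.dump(oids_by_object, f)

def load_oid_map(path):
    """Read the QD's Oid assignments back when restoring a QE."""
    with open(path) as f:
        return json.load(f)
```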
    • Check for non-covering indexes in partitioned AO tables · 3a569aba
      Committed by Daniel Gustafsson
      If a partitioned append-only table had an index created on the parent
      table, and subsequently a table without any indexes at all was
      exchanged into the hierarchy, then pg_upgrade would fail on AO blockdir
      synchronization. The DDL from pg_dump will recreate the index over the
      partitioned table, including the partition which before didn't have an
      index, and that will cause pg_upgrade to look for a preassigned Oid
      which doesn't exist. Check for this and abort the upgrade in case we
      find an offending relation.
      3a569aba
    • Use actual relnames from catalog for AO aux tables · fe03e750
      Committed by Daniel Gustafsson
      When querying the AO{CS} auxiliary relations, extract the actual
      relnames from the catalog rather than assuming the names. Since
      we need to query the catalog for the blkdir relation anyway, we
      might as well get all the aux tables in the query.
      fe03e750
    • Relax relation matching logic to handle toast tables · 0713d4b7
      Committed by Daniel Gustafsson
      The relation matching logic during upgrades was very strict, which
      caused issues when for example a relation had a toast attribute
      which was subsequently dropped. In the new cluster there will be
      no toast table for this table. Also don't treat the existence of
      new toast tables as a fatal error since newer versions are free
      to create toasts where previous versions didn't.
      
      This is a partial backport of upstream commit 73b9952e.
      0713d4b7
    • Allow upgrades not recursing to children in SET DEFAULT · 70770714
      Committed by Daniel Gustafsson
      The constraints on children are handled in the dump and will be set
      on each individual child table manually, so allow skipping the
      recursion in binary upgrade mode.
      70770714
    • Fix inheritance dumping during binary upgrade · bc580505
      Committed by Daniel Gustafsson
      Commit 13216bfd backported fixes for dumping dropped attributes,
      but failed to block out the conislocal handling which we won't get
      until we merge 8.4. Properly block it out for now with a MERGE
      marker and implement dumping of inherited constraints in a way that
      works for 8.3 based Greenplum.
      bc580505
    • Add override for pg_resetxlog safety question · f8c13bb4
      Committed by Daniel Gustafsson
      Handling the override flag question when running pg_resetxlog
      programmatically is cumbersome for no reason. Add an override
      argument (undocumented) to make the code less complicated. The
      question could be circumvented by piping "y" so passing "-y"
      is equal in terms of manual intervention required.
      f8c13bb4
    • Handle Greenplum 5.0 in heap page conversion · b1d1588b
      Committed by Daniel Gustafsson
      The heap page conversion is only applicable in upgrades from 4.3
      to 5.0. Ensure that we aren't already on 5.0 when figuring out
      whether to convert. Also initialize the flag to false for extra safety.
      Unless the queries find that we need to convert the underlying heap
      pages we really shouldn't attempt it.
      b1d1588b
    • Support parallel installations in gpdemo · a1e60572
      Committed by Daniel Gustafsson
      Allow creating a gpdemo cluster in a specified directory by
      overriding the master data directory. This is needed for pg_upgrade
      testing where we need two individual gpdemo clusters at the same
      time.
      a1e60572
    • Allow zero-label enums. · ad147b6e
      Committed by Heikki Linnakangas
      Backport this commit from upstream, needed for binary upgrade:
      
      commit 1fd9883f
      Author: Bruce Momjian <bruce@momjian.us>
      Date:   Sat Dec 26 16:55:21 2009 +0000
      
          Zero-label enums:
      
          Allow enums to be created with zero labels, for use during binary upgrade.
      ad147b6e
    • Fix misunderstanding of pg_extension.extnamespace field. · 6cb8101a
      Committed by Heikki Linnakangas
      Extensions do not live in schemas. pg_extension.extnamespace is *not* the
      schema that the extension belongs to, unlike most "*namespace" fields in
      catalog tables.
      6cb8101a
    • Fix incorrect struct member in backport · 51bd8009
      Committed by Daniel Gustafsson
      The backport of the data checksum catalog changes backported the
      relevant GUC from a version which has struct config_bool defined
      differently than GPDB. The reason an extra NULL in the config_bool
      array initialization wasn't causing a compilation failure is that
      there is an extra bool member at the end which is only set during
      runtime, reset_val. The extra NULL was "overflowing" into this
      member and thus only raised a warning under -Wint-conversion:
      
          guc.c:1180:15: warning: incompatible pointer to integer
                         conversion initializing 'bool' (aka 'char')
                         with an expression of type 'void *'
      
      Fix by removing the superfluous NULL. Since it was setting reset_val
      to NULL (and for a GUC which is yet to "do something") this should
      have no effect.
      51bd8009
    • Implement resgroup memory limit (#2669) · b5e1fb0a
      Committed by Ning Yu
      Implement resgroup memory limit.
      
      In a resgroup we divide the memory into several slots, the number
      depends on the concurrency setting in the resgroup. Each slot has a
      reserved quota of memory, all the slots also share some shared memory
      which can be acquired preemptively.
      
      Some GUCs and resgroup options are defined to adjust the exact allocation
      policy:
      
      resgroup options:
      - memory_shared_quota
      - memory_spill_ratio
      
      GUCs:
      - gp_resource_group_memory_limit
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      b5e1fb0a
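The slot arithmetic described above can be sketched as follows; treating memory_shared_quota as an integer percentage of the group's memory is an assumption for illustration, not the exact resgroup accounting:

```python
def memory_slot_quotas(group_memory, concurrency, memory_shared_quota):
    """Split a resource group's memory into a shared pool plus
    per-slot reserved quotas, one slot per concurrent transaction.

    memory_shared_quota is a percentage (0-100) of the group's
    memory that all slots may acquire preemptively.
    """
    shared = group_memory * memory_shared_quota // 100
    per_slot = (group_memory - shared) // concurrency
    return per_slot, shared
```

For example, a group with 1000 MB, concurrency 5, and a 20% shared quota reserves 160 MB per slot with 200 MB shared.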
    • Added cdbvars.h to be installed. (#2710) · 53e82866
      Committed by Alex Diachenko
      Added cdbvars.h to be installed so extensions can import it.
      53e82866
    • Revert "gpperfmon: refactor to make 'skew' calculation more readable" · e020cc1e
      Committed by Marbin Tan
      This code is not correctly recording the data. There seems to be a
      bug in how we're modifying the values: most likely we're changing
      the pointer address instead of the value it points to.
      
      We will need to fully test this part of the code again and create a new PR.
      
      This reverts commit 411d3c82.
      e020cc1e
    • Add tool to assist in managing WAL replication mirror segments · 43492c38
      Committed by Jimmy Yih
      This gpsegwalrep tool is meant for assisting in segment WAL
      replication development, which is why it is not placed in
      gpMgmt/bin/. The tool is used to initialize, start, stop, and destroy
      WAL replication mirror segments. It is also a rough example of what
      a segment walrep tool could look like in Greenplum 6.x.
      43492c38
    • Make gpdemo able to bring up a mirrorless cluster · 89804845
      Committed by Jimmy Yih
      A developer may sometimes want to bring up a Greenplum demo
      cluster without mirrors. This change makes that possible.
      
      Example:
      WITH_MIRRORS=false make create-demo-cluster
      89804845
    • gpperfmon: address compiler warning · 153867ad
      Committed by Marbin Tan
      The compiler complains about a missing prototype even though one
      is already there. Adding void to the empty parameter list in the
      prototype silences the warning.
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
      153867ad
    • gpperfmon: refactor quantum name to be more explicit · 607c594a
      Committed by Tushar Dadlani
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      607c594a