1. 11 Jul 2017, 2 commits
  2. 10 Jul 2017, 2 commits
  3. 07 Jul 2017, 8 commits
    • Use PGXS on concourse without compiled GPDB · e87dfaec
      Committed by Adam Lee
    • 53d84389
    • Resgroup catalog changes · 4fafebe2
      Committed by Ning Yu
      Change initial contents in pg_resgroupcapability:
      * Remove memory_redzone_limit;
      * Add memory_shared_quota, memory_spill_ratio;
      
      Change resgroup concurrency range to [1, 'max_connections']:
      * Original range is [0, 'max_connections'], and -1 means unlimited.
      * Now the range is [1, 'max_connections'], and -1 is not supported.
      
      Change resgroup limit type from float to int.
      
      Changed the following resgroup resource limit types from float to integer percentage values:
      * cpu_rate_limit;
      * memory_limit;
      * memory_shared_quota;
      * memory_spill_ratio;
    • A
      Add distributed xact info to CommitPrepared xlog record and redo. · a9df23a1
      Ashwin Agrawal 提交于
      Currently, the CommitPrepared xlog record, `xl_xact_commit_prepared`, doesn't store
      information for the distributed transaction, such as the distributed transaction id
      or the distributed timestamp. It's extremely helpful to have this information
      recorded, and it is also needed to replay/redo the xlog record.
      
      Currently, redo of the CommitPrepared xlog record does not update the distributed
      commit log. As of now this does not seem to pose any issues, since recovery of the
      primary or failover to the mirror will disconnect all existing connections, but for
      consistency it's better to redo the distributed log commit as well during redo of
      CommitPrepared.
    • Remove unused variable in checkpoint record. · f737c2d2
      Committed by Ashwin Agrawal
      The segmentCount variable is unused in the TMGXACT_CHECKPOINT structure, hence
      remove it. Also, remove the union in the fspc_agg_state, tspc_agg_state and
      dbdir_agg_state structures, as there is no reason for having it.
    • Use bytea datatype on the AO blockdir minipage attribute · 056c7530
      Committed by Daniel Gustafsson
      The minipage attribute was using varbit without actually storing a
      varbit datum in it. Change over to bytea since it makes reading the
      value back easier, especially for pg_upgrade. This complements the
      change in commit dce769fe which performed the same change
      for the visimap attribute of the AO visimap relation.
      
      Since the bitmap hack function is created in the pg_temp schema,
      exempt it from Oid synchronization during binary upgrades to allow
      creation. This fix applies to the visimap handling as well.
    • Bump ORCA version to 2.35.1 · 42a17ef2
      Committed by Omer Arap
    • DOCS: hide pgAdmin III topics (#2722) · 85a8c3d2
      Committed by Lisa Owen
  4. 06 Jul 2017, 7 commits
  5. 04 Jul 2017, 1 commit
  6. 03 Jul 2017, 3 commits
  7. 01 Jul 2017, 7 commits
    • behave: Fix gpperfmon behave test intermittent failure for log alert history · 63ad5094
      Committed by Marbin Tan
      There are times when the gpperfmon_log_alert_history scenario fails
      because there is no data in the log alert history table.
      This might be due to us copying an empty csv file; gpperfmon writes to the
      log alert file on a cadence, so we might be copying a file that has not
      been written into yet and is possibly empty.
      
      Make sure that we have something to copy first before proceeding to the
      next step (a sketch of the idea follows this entry).
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
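      A minimal sketch of the wait-for-data idea, assuming a hypothetical behave helper;
      the path, timeout, and function name are illustrative, not the actual test code:
      
      import os
      import time
      
      def wait_for_nonempty_file(path, timeout=30, interval=1):
          """Poll until the file exists and has content, so the copy step never
          picks up a csv that gpperfmon has not written into yet."""
          deadline = time.time() + timeout
          while time.time() < deadline:
              if os.path.exists(path) and os.path.getsize(path) > 0:
                  return
              time.sleep(interval)
          raise AssertionError("no data written to %s within %d seconds" % (path, timeout))
      
      # illustrative usage inside a behave step implementation
      # wait_for_nonempty_file("/tmp/gpperfmon/alert_log.csv")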
    • gpperfmon: refactor to make 'skew' calculation more readable · 6d01138b
      Committed by Marbin Tan
      Coefficient of Variation Calculation
      
      Coefficient of variation is the standard deviation divided by the mean.
      
      We're using the term skew very loosely in our description, as we're
      actually calculating the coefficient of variation.
      
      With the coefficient of variation, we can tell how dispersed the data points
      are across the segments: the higher the coefficient of variation, the more
      non-uniform the distribution of the data in the cluster (a numeric sketch
      follows this entry).
      
      The coefficient of variation is unitless, so it can be used to compare
      different clusters and how they perform relative to each other.
      
      CPU skew calculation:
      mean(cpu) = sum(per segment cpu cycles) / count(segments)
      variance(cpu) = sum((cpu(segment) - mean(cpu))^2) / count(segments)
      std_dev(cpu) = sqrt(variance(cpu))
      skew(cpu) = coefficient of variation = std_dev(cpu) / mean(cpu)
      
      Row out skew calculation:
      mean(row) = sum(per segment rows out) / count(segments)
      variance(row) = sum((row(segment) - mean(row))^2) / count(segments)
      std_dev(row) = sqrt(variance(row))
      skew(row) = coefficient of variation = std_dev(row) / mean(row)
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
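      A minimal numeric sketch of the coefficient-of-variation calculation described
      above; the per-segment values are made-up illustrations, not gpperfmon data:
      
      import math
      
      def coefficient_of_variation(values):
          """skew = std_dev / mean, computed over one value per segment."""
          n = len(values)
          mean = sum(values) / float(n)
          variance = sum((v - mean) ** 2 for v in values) / n
          return math.sqrt(variance) / mean
      
      # made-up per-segment cpu cycles and output row counts
      cpu_per_segment = [1000.0, 1100.0, 900.0, 4000.0]
      rows_per_segment = [250.0, 260.0, 240.0, 1000.0]
      print(coefficient_of_variation(cpu_per_segment))   # larger value => more cpu skew
      print(coefficient_of_variation(rows_per_segment))  # larger value => more row skew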
    • gpperfmon: consolidate cpu skew and row skew behave test · e49e0fb7
      Committed by Tushar Dadlani
      Doing any kind of intensive work should show up in gpperfmon as both
      cpu skew and row skew, so the two scenarios are consolidated since they
      can be tested at the same time.
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
    • gpperfmon: Fix incorrect data saved into queries_history table · 12cca294
      Committed by Marbin Tan
      sigar_proc_cpu_get is the function that takes a pid and reads the
      cpu_elapsed value for that pid at a given moment.
      
      get_pid_metrics saves the information into a hash table within the
      gpsmon process. A pid's entry only gets wiped out of the hashtable when
      gpmmon sends a dump command to gpsmon.
      
      The gpsmon process sends tcp packets with information about cpu_elapsed
      (the amount of cpu cycles spent on a given slice/process). Doing a
      'dump' clears out the gpsmon hashtables.
      
      This could lead to scenarios where a query has ended and the process has
      died, so sigar_proc_cpu_get can't find the pid and writes 0 into the
      struct used to update the hashtable. The hashtable then holds 0 for
      cpu_elapsed, that 0 becomes the last entry in queries_tail, and
      queries_history ends up recording 0 for cpu_elapsed.
      
      Fix: Ensure that we validate the functions that request pid metrics
      from libsigar and log the issues if they occur (a sketch of the idea
      follows this entry).
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
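      A minimal sketch of the validate-before-update idea in Python (gpsmon itself
      is C code calling libsigar directly); the metric fetcher, hashtable, and
      simplified /proc parsing here are illustrative stand-ins:
      
      import logging
      
      proc_metrics = {}  # stand-in for gpsmon's per-pid hashtable
      
      def fetch_cpu_elapsed(pid):
          """Illustrative stand-in for sigar_proc_cpu_get(): return None on failure
          instead of silently reporting 0 for a pid that no longer exists."""
          try:
              with open("/proc/%d/stat" % pid) as f:
                  fields = f.read().split()  # simplified parse of /proc/<pid>/stat
              return int(fields[13]) + int(fields[14])  # utime + stime, in clock ticks
          except (IOError, OSError, ValueError, IndexError):
              return None
      
      def update_pid_metrics(pid):
          cpu_elapsed = fetch_cpu_elapsed(pid)
          if cpu_elapsed is None:
              # The fix: log and keep the last known value rather than storing 0,
              # which would otherwise flow into queries_tail and queries_history.
              logging.warning("could not read cpu metrics for pid %s; keeping last value", pid)
              return
          proc_metrics[pid] = cpu_elapsed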
    • Minor syntax change to statistics restore · ab06db22
      Committed by Jamie McAtamney
    • Fix error with statistics restore when statistics already exist · e9a86809
      Committed by Karen Huddleston
      Previously, if pg_statistic already contained statistics for
      a given table column, attempting to insert those statistics
      again during the restore would give a primary key error and
      the statistics would not be correctly restored.
      
      Now, existing statistics are deleted just before inserting the
      restored statistics, so there is no collision (a sketch of the
      idea follows this entry).
      Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
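      A minimal sketch of the delete-before-insert idea, assuming the restore emits
      one pg_statistic INSERT per column; the helper name and the use of
      (starelid, staattnum) as the conflicting key are assumptions for illustration:
      
      def restore_statements(relid, attnums, insert_sql_by_attnum):
          """Pair each dumped pg_statistic INSERT with a DELETE of any existing
          row for the same key, so the following insert cannot collide."""
          for attnum in attnums:
              yield ("DELETE FROM pg_statistic "
                     "WHERE starelid = %d AND staattnum = %d;" % (relid, attnum))
              yield insert_sql_by_attnum[attnum]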
    • DOCS: Editing text that pins NetBackup to specific version · 501dd328
      Committed by dyozie
  8. 30 Jun 2017, 9 commits
  9. 29 Jun 2017, 1 commit