1. 22 Jun 2020, 1 commit
    • Fix flaky appendonly test. · f860ff0c
      Committed by (Jerome)Junfeng Yang
      This fixes the error:
      ```
      --- /tmp/build/e18b2f02/gpdb_src/src/test/regress/expected/appendonly.out	2020-06-16 08:30:46.484398384 +0000
      +++ /tmp/build/e18b2f02/gpdb_src/src/test/regress/results/appendonly.out	2020-06-16 08:30:46.556404454 +0000
      @@ -709,8 +709,8 @@
         SELECT oid FROM pg_class WHERE relname='tenk_ao2'));
             case    | objmod | last_sequence | gp_segment_id
              -----------+--------+---------------+---------------
            + NormalXid |      0 | 1-2900        |             1
              NormalXid |      0 | >= 3300       |             0
            - NormalXid |      0 | >= 3300       |             1
              NormalXid |      0 | >= 3300       |             2
              NormalXid |      1 | zero          |             0
              NormalXid |      1 | zero          |             1
      ```
      
      The flakiness comes from the fact that under ORCA, a `CREATE TABLE`
      statement without `DISTRIBUTED BY` treats the table as randomly
      distributed, while the planner treats it as distributed by the
      table's first column.
      
      ORCA:
      ```
      CREATE TABLE tenk_ao2 with(appendonly=true, compresslevel=0,
      blocksize=262144) AS SELECT * FROM tenk_heap;
      NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause. Creating a NULL
      policy entry.
      ```
      
      Planner:
      ```
      CREATE TABLE tenk_ao2 with(appendonly=true, compresslevel=0,
      blocksize=262144) AS SELECT * FROM tenk_heap;
      NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column(s)
      named 'unique1' as the Greenplum Database data distribution key for this
      table.
      ```
      
      So the data distribution for table tenk_ao2 is not as expected.
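
      A hedged illustration (not part of this commit): pinning the policy
      explicitly makes the layout deterministic under both optimizers.
      ```
      -- explicit distribution key: identical behavior under ORCA and planner
      CREATE TABLE tenk_ao2 WITH (appendonly=true, compresslevel=0, blocksize=262144)
      AS SELECT * FROM tenk_heap
      DISTRIBUTED BY (unique1);
      ```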
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
  2. 20 Jun 2020, 1 commit
    • For Python testing artifacts, introduce combination of Concourse cache and pip --cache-dir. · dcc5abb7
      Committed by Ed Espino
      For the Python testing artifacts used by the CLI tools, utilize the
      Concourse cached directories feature to create and use a pip cache dir
      shared between task runs.
      
      Be aware, the cache is scoped to the worker the task is run on. We do
      not get a cache hit when subsequent builds run on different workers.
      
      * The environment variable PIP_CACHE_DIR holds the location of the
      cache directory.
      
      * Add "--retries 10" to Behave test dependency pip install commands.
  3. 19 Jun 2020, 4 commits
    • Fix cursor snapshot dump xid issue · 32a3a4db
      Committed by Weinan WANG
      For the cursor snapshot dump, we need to record both the distributed
      and the local xid. So far, we only recorded the distributed xid in the
      dump, and the dump read function incorrectly assigned the distributed
      xid to the local xid.
      
      Fix it.
    • Re-enable test segwalrep/dtx_recovery_wait_lsn (#10320) · fe26d931
      Committed by Paul Guo
      Enable and refactor test isolation2:segwalrep/dtx_recovery_wait_lsn
      
      The test was disabled in 791f3b01 because of concern that the line
      numbers from sql_isolation_testcase.py recorded in the answer file
      could change. We refactor the test to remove that dependency and then
      enable it again.
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • Avoid generating core files during testing. (#10304) · 4a61357c
      Committed by Paul Guo
      We have some negative tests that intentionally panic and thus end up
      generating core files when the system is configured for core dumps.
      Long ago we optimized to avoid generating core files in some cases;
      now we have found new scenarios that can be further optimized.

      1. Avoid core file generation with setrlimit() in the FATAL fault
      injection code. Sometimes FATAL is upgraded to PANIC (e.g. in a
      critical section, or on failure during QD prepare related work), so we
      can avoid generating a core file in that scenario too. Note that even
      if the FATAL is not upgraded, skipping the core file is mostly fine
      since the process will quit soon. With this change, we avoid two core
      files from test isolation2:crash_recovery_dtm.
      
      2. We previously sanity-checked dbid/segidx in QE:HandleFtsMessage()
      and panicked on inconsistency when cassert is enabled. But we really
      do not need to panic: the root cause of such a failure is quite
      straightforward, the call stack is simply PostgresMain() ->
      HandleFtsMessage(), and that part of the code does not involve shared
      memory, so there is no shared-memory mess to worry about (otherwise we
      might want a core file to inspect). Downgrade the log level to FATAL.
      This avoids 6 core files from test
      isolation2:segwalrep/recoverseg_from_file.
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • docs - add views pg_stat_all_tables and indexes (#10248) · 74c5bb7d
      Committed by Mel Kiyama
      * docs - add views pg_stat_all_tables and indexes
      
      pg_stat_all_indexes
      pg_stat_all_tables
      
      Also add some statistics GUCs.
      --track_activities
      --track_counts
      
      * docs - clarify that seq_scan and idx_scan refer to the total number of scans from all segments (see the example query below)
      
      * docs - minor edits
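
      A hedged example (not part of the docs commit) of reading these views;
      in Greenplum the counters are totals across all segments:
      ```
      -- total sequential and index scans for one table, across all segments
      SELECT relname, seq_scan, idx_scan
      FROM pg_stat_all_tables
      WHERE relname = 'tenk_heap';
      ```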
  4. 18 Jun 2020, 6 commits
    • Fix CASE WHEN IS NOT DISTINCT FROM clause incorrect dump. (#10298) · 3b2aed6e
      Committed by (Jerome)Junfeng Yang
      Dumping a 'CASE WHEN (arg1) IS NOT DISTINCT FROM (arg2)' clause used
      to lose arg1. For example:
      ```
      CREATE OR REPLACE VIEW xxxtest AS
      SELECT
          CASE
          WHEN 'I will disappear' IS NOT DISTINCT FROM ''::text
          THEN 'A'::text
          ELSE 'B'::text
          END AS t;
      ```
      The dump loses 'I will disappear':
      
      ```
      SELECT
          CASE
          WHEN IS NOT DISTINCT FROM ''::text
          THEN 'A'::text
          ELSE 'B'::text
          END AS t;
      ```
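
      For illustration only (not part of this commit), the dumped definition
      can be checked directly once the view above exists:
      ```
      SELECT pg_get_viewdef('xxxtest'::regclass);
      -- after the fix, the WHEN arm keeps its first argument:
      --   CASE WHEN 'I will disappear' IS NOT DISTINCT FROM ''::text ...
      ```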
    • Fix a flaky test for gdd/dist-deadlock-upsert (#10302) · a3f34ae7
      Committed by Hao Wu
      * Fix a flaky test for gdd/dist-deadlock-upsert
      
      When the GDD probe runs is nondeterministic, but its timing is
      important for the test gdd/dist-deadlock-upsert. If the GDD probe runs
      immediately after the two inter-deadlocked transactions start, one of
      the transactions is killed. The isolation2 framework considers a
      transaction blocked if it doesn't finish within 0.5 seconds. So, if
      the killed transaction aborts too early, the test framework sees no
      deadlock.
      Analyzed-by: Gang Xiong <gxiong@pivotal.io>
      
      * rm sleep
    • resgroup: fix the cpu value of the per host status view · e0d78729
      Committed by Ning Yu
      Resource groups do not distinguish per-segment cpu usage; the cpu
      usage reported by a segment is actually the total cpu usage of all the
      segments on the host. This is by design, not a bug. However, the
      gp_toolkit.gp_resgroup_status_per_host view reported the per-host cpu
      usage as the sum over all the segments on the same host, so the
      reported value was actually N times the real usage, where N is the
      number of segments on that host.
      
      Fixed by reporting the avg() instead of the sum().
      
      Tests are not provided, as resgroup/resgroup_views has never verified
      cpu usage because cpu usage is unstable on pipelines. However, I have
      verified manually.
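
      A hedged sketch (not from this commit) of reading the corrected view,
      assuming it exposes rsgname, hostname, and cpu columns:
      ```
      -- per-host cpu is now the average over a host's segments, not the sum
      SELECT rsgname, hostname, cpu
      FROM gp_toolkit.gp_resgroup_status_per_host
      ORDER BY rsgname, hostname;
      ```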
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
    • Enable brin in ao/aocs table (#9537) · 46d9e26a
      Committed by Jinbao Chen
      We merged BRIN from Postgres 9.5, but Greenplum did not enable BRIN
      on AO/AOCS tables.

      The reason BRIN cannot be used directly on an AO/AOCS table is that
      the storage structure of AO/AOCS differs from the heap table's. A heap
      table has only one physical file, and all block numbers are
      continuous. The revmap in BRIN is an array that spans multiple blocks,
      and that layout does not make sense for an AO/AOCS table.
      
      AO/AOCS tables have 128 segment files, and the block numbers in these
      segments are distributed over the entire value range. If we used an
      array to record the information of each block, the array would be too
      large.

      So we introduce an upper-level structure to solve this problem: an
      array that records the block numbers of the revmap blocks. The revmap
      blocks are not continuous; when we need a new revmap block, we just
      extend a new one and record its block number in the upper-level
      array.
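
      For illustration only (a sketch, not from the commit's tests): with
      this change a BRIN index can be built on an append-optimized table:
      ```
      CREATE TABLE ao_brin_demo (id int, ts timestamp)
      WITH (appendonly=true) DISTRIBUTED BY (id);
      INSERT INTO ao_brin_demo
      SELECT g, timestamp '2020-01-01' + g * interval '1 minute'
      FROM generate_series(1, 100000) g;
      CREATE INDEX ao_brin_demo_ts_idx ON ao_brin_demo USING brin (ts);
      ```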
      Reviewed-by: Asim R P <pasim@vmware.com>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      Reviewed-by: Adam Lee <adam8157@gmail.com>
    • docs - update postGIS 2.5.4 docs (#10297) · 39a25f82
      Committed by Mel Kiyama
      * docs - update postGIS 2.5.4 docs
      
      Updates for Greenplum PostGIS 2.5.4 v2
      
      --Add list of PostGIS extensions
      --Add support for PostGIS TIGER geocoder, address standardizer and address rules files.
      --Update install/uninstall instructions to use CREATE EXTENSION command
      --Remove postgis_manager.sh script
      --Remove PostGIS Raster limitation.
      
      * docs updated PostGIS 2.5.4 docs based on review comments.
      
      * docs - removed postgis_raster extension.
      
      * docs - review comment updates
      --Added section for installing the PostGIS package
      --Updated section on removing the PostGIS package
      --Fixed typos.
      
      * docs - updated platform requirements for PostGIS 2.5.4 v2
      --also removed "beta" from GreenplumR
    • Document steps for installing pygresql for OSS database build · ab1f69ec
      Committed by Tyler Ramer
      PyGreSQL may now be installed via pip or via Ubuntu apt.
      
      Update the travis pipeline as well, using submodules to pull the
      necessary python dependencies; they are therefore removed from pip as
      well.
      Authored-by: Tyler Ramer <tramer@pivotal.io>
  5. 17 Jun 2020, 9 commits
    • Docs - remove blacklist/whitelist terminology · 6b4fc852
      Committed by David Yozie
    • Disallow to change the distribution policy to REPLICATED for partition table (#10313) · 78cccb81
      Committed by Hao Wu
      This patch fixes the issue: https://github.com/greenplum-db/gpdb/issues/10224
      A replicated table is not allowed to be a partitioned table, so an
      existing partitioned table must not have its distribution policy
      altered to REPLICATED.
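
      An illustrative sketch (not from the commit; the table name is
      invented) of the command that is now rejected:
      ```
      -- fails: a partitioned table cannot become REPLICATED
      ALTER TABLE t_part SET DISTRIBUTED REPLICATED;
      ```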
      Reported-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
    • GetLatestSnapshot on QEs always return without distributed snapshot. · d8f4a45f
      Committed by Zhenghua Lyu
      Greenplum tests the visibility of heap tuples first using the
      distributed snapshot. The distributed snapshot is generated on the QD
      and then dispatched to QEs. Some utility statements need to run under
      the latest snapshot, so they invoke `GetLatestSnapshot` on QEs. But
      remember that we cannot obtain the latest distributed snapshot there.
      
      Subtle cases are Alter Table or Alter Domain statements: on the QD
      they get a snapshot in Portal Run and then try to take locks on the
      target table in ProcessUtilitySlow. The key points:
        1. taking the lock may block on other transactions
        2. the statement is later woken up to continue
        3. by the time it continues, the world has changed: the
           transactions that blocked it have finished
      
      Previously, the QD did not take a snapshot before dispatching a
      utility statement to QEs, so the distributed snapshot did not reflect
      that "world change". This leads to bugs: for example, if the first
      transaction rewrites the whole heap, and the second (Alter Table or
      Alter Domain) statement continues with a distributed snapshot in which
      txn1 has not yet committed, it sees no tuples in the new heap!

      This commit fixes the issue by taking a local snapshot in
      `GetLatestSnapshot` when running on QEs.
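
      A hedged two-session sketch of the failure mode (illustrative only;
      the tables and sessions are invented):
      ```
      -- session 1: rewrite t's heap inside a transaction
      BEGIN;
      TRUNCATE t;
      INSERT INTO t SELECT * FROM t_backup;

      -- session 2: blocks on session 1's lock
      ALTER TABLE t ALTER COLUMN c TYPE text;

      -- session 1:
      COMMIT;
      -- session 2 resumes; before the fix its stale distributed snapshot
      -- could see no tuples in t's new heap
      ```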
      
      See Github issue: https://github.com/greenplum-db/gpdb/issues/10216
      Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
    • Use master of pygresql due to bug in 5.1.2 · cb8d54a6
      Committed by Tyler Ramer
      We encountered a bug in escaping dbname and connection options in
      pygresql 5.1.2, for which we submitted a patch here:
      https://github.com/PyGreSQL/PyGreSQL/pull/40
      
      This has been merged, but it will take time to land in a tagged
      release. For this reason, we install from the source at this commit:
      https://github.com/PyGreSQL/PyGreSQL/commit/b1e040e989b5b1b75f42c1103562bfe8f09f93c3
      Co-authored-by: Tyler Ramer <tramer@pivotal.io>
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
    • Remove dtx_recovery_wait_lsn test · 791f3b01
      Committed by Tyler Ramer
      The test addressed in this commit, added in commit f3df8b18, fails
      for the entirely unrelated reason that, due to a modification of
      sql_isolation_testcase.py, the line numbers are different.

      I find this test very fragile for this reason, and because we are
      relying on an execution failure in isolation2 python code to test the
      database code. This means that any refactoring of isolation2 will
      cause this test to fail - which should not be the case.
      
      I looked into adding an ignore for the exact lines, but isolation2
      wants a matching ignore in the input sql file - which makes the test
      useless, because we are looking for an exact exception from isolation2
      for a valid sql input. Isolation2 doesn't give us the framework to
      ignore only some messages on the output side. Using an isolation2 init
      modification would still just ignore the actual problem, only in a
      different file.

      This fix should be considered a temporary workaround to get the
      pipeline green while a better solution is determined.
      Authored-by: Tyler Ramer <tramer@pivotal.io>
    • Update isolation2 expected output considering changes in pg · 1131c5a9
      Committed by Tyler Ramer
      The update to the pygresql pg connection allows the output of sql
      isolation2 testing to be more similar to psql's. Thus, we revert some
      of the changes made in commit 20b3aa3a to be more in line with the
      usual psql output. Notably, trailing zeroes on floats are trimmed.
      Co-authored-by: Tyler Ramer <tramer@pivotal.io>
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
    • Close short lived connections · bc35b6b2
      Committed by Tyler Ramer
      Due to the refactor of dbconn and newer versions of pygresql, using
      `with dbconn.connect() as conn` no longer attempts to close the
      connection, as it did before. Instead, this syntax uses the connection
      itself as a context manager and, as noted in execSQL, overrides the
      autocommit behavior of execSQL.

      Therefore, close the connection manually to ensure that execSQL is
      auto-committed and the connection is closed.
      Co-authored-by: Tyler Ramer <tramer@pivotal.io>
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
    • Refactor dbconn · 330db230
      Committed by Tyler Ramer
      One reason pygresql was previously modified was that it did not
      handle closing a connection very gracefully. In the process of
      updating pygresql, we wrapped the connection it provides with a
      ClosingConnection function, which gracefully closes the connection
      when the "with dbconn.connect() as conn" syntax is used.
      
      This did, however, illustrate issues where a cursor might have been
      created as the result of a dbconn.execSQL() call, which seems to hold
      the connection open if not specifically closed.
      
      It is therefore necessary to remove the ability to get a cursor from
      dbconn.execSQL(). To highlight this difference, and to keep future use
      of this library easy, I've cleaned up and clarified the dbconn
      execution code to include the following features:

      - dbconn.execSQL() closes the cursor as part of the function; it
        returns no rows
      - dbconn.query() is added, which behaves like dbconn.execSQL() except
        that it returns a cursor
      - dbconn.execQueryforSingleton() is renamed dbconn.querySingleton()
      - dbconn.execQueryforSingletonRow() is renamed dbconn.queryRow()
      Authored-by: Tyler Ramer <tramer@pivotal.io>
    • Update PyGreSQL from 4.0.0 to 5.1.2 · f5758021
      Committed by Tyler Ramer
      This commit updates pygresql from 4.0.0 to 5.1.2, which requires
      numerous changes to take advantage of the major result-syntax change
      that pygresql 5 implemented. Of note, cursors and query objects now
      automatically cast returned values to appropriate python types - a
      list of ints, for example, instead of a string like "{1,2}". This is
      the bulk of the changes.
      
      Updating to pygresql 5.1.2 provides numerous benefits, including the
      following:
      
      - CVE-2018-1058 was addressed in pygresql 5.1.1
      
      - We can save notices in the pgdb module, rather than relying on importing
      the pg module, thanks to the new "set_notices()"
      
      - pygresql 5 supports python3
      
      - Thanks to a change in the cursor, using a "with" syntax guarantees
        a "commit" on the close of the with block.
      
      This commit is a starting point for additional changes, including
      refactoring the dbconn module.
      
      Additionally, since isolation2 uses pygresql, some pl/python scripts
      were updated, and isolation2 SQL output is further decoupled from
      pygresql. The output of a psql command should be similar enough to
      isolation2's pg output that minimal or no modification is needed to
      ensure gpdiff can recognize the output.
      Co-authored-by: Tyler Ramer <tramer@pivotal.io>
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
  6. 16 Jun 2020, 5 commits
    • Properly mark null return from combine functions · 736898ad
      Committed by Jesse Zhang
      We had a bug in a few of the combine functions where if the combine
      function returned a NULL, it didn't set fcinfo->isnull = true. This led
      to a segfault when we would spill in the final hashagg of a two-stage
      agg inside the serial function. So, properly mark NULL outputs from the
      combine functions.
      Co-authored-by: Denis Smirnov <sd@arenadata.io>
      Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
    • Fix double deduction of FREEABLE_BATCHFILE_METADATA · 66a0cb4d
      Committed by Jesse Zhang
      Earlier, we always deducted FREEABLE_BATCHFILE_METADATA inside
      closeSpillFile(), regardless of whether the spill file was already
      suspended. This deduction is already performed inside
      suspendSpillFiles(). The double accounting drives
      hashtable->mem_for_metadata negative, and we get:
      
      FailedAssertion("!(hashtable->mem_for_metadata > 0)", File: "execHHashagg.c", Line: 2141)
      Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
    • Fix assert condition in spill_hash_table() · 067bb350
      Committed by Jesse Zhang
      This commit fixes the following assertion failure, reported in
      issue #9902 (https://github.com/greenplum-db/gpdb/issues/9902):
      
      FailedAssertion("!(hashtable->nbuckets > spill_set->num_spill_files)", File: "execHHashagg.c", Line: 1355)
      
      hashtable->nbuckets can actually end up equal to
      spill_set->num_spill_files, which triggers the failure. This is
      because hashtable->nbuckets is set from HashAggTableSizes->nbuckets,
      which can end up equal to gp_hashagg_default_nbatches. Refer:
      nbuckets = Max(nbuckets, gp_hashagg_default_nbatches);
      
      Also, spill_set->num_spill_files is set with
      HashAggTableSizes->nbatches, which is further set to
      gp_hashagg_default_nbatches.
      
      Thus, these two entities can be equal.
      Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
    • Increase retry count for pg_rewind tests' replication promotion and streaming. (#10292) · a3d8302a
      Committed by (Jerome)Junfeng Yang
      Increase the retry count to prevent test failures; most of the time
      the failure is caused by slow processing.
    • Fix ICW test if GPDB compiled without ORCA · 9aa2b26c
      Committed by Chris Hajas
      We need to ignore the output when enabling/disabling an Orca xform,
      because if the server is not compiled with Orca there will be a diff
      (and we don't really care about this output). An example of the ignore
      pattern is sketched below.

      Additionally, clean up unnecessary/excessive setting of GUCs.

      Some of these GUCs were on by default or only intended for a specific
      test. Explicitly setting them caused them to appear at the end of
      `explain verbose` plans, making the expected output more difficult to
      match when the server was built with or without Orca.
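
      A hedged sketch (not from the commit; the xform name is arbitrary) of
      the gpdiff ignore pattern around an Orca xform toggle:
      ```
      -- start_ignore
      SELECT disable_xform('CXformInnerJoin2HashJoin');
      -- end_ignore
      ```
      With the output ignored, the test diffs cleanly whether or not the
      server was built with Orca.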
  7. 15 Jun 2020, 4 commits
    • Retry more for replication synchronization waiting to avoid isolation2 test flakiness. (#10281) · ca360700
      Committed by Paul Guo
      Some test cases have been failing due to too few retries. Increase
      them and also create some common UDFs for reuse.
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • Fix flakiness of "select 1" output after master reset due to injected panic... · 02ad1fc4
      Committed by Paul Guo
      Fix flakiness of "select 1" output after master reset due to injected panic fault before_read_command (#10275)
      
      Several tests inject a panic in before_read_command to trigger a
      master reset. Previously we ran "select 1" after the fault-injection
      query to verify it, but the output is sometimes nondeterministic,
      i.e. sometimes we do not see the line
      
      PANIC:  fault triggered, fault name:'before_read_command' fault type:'panic'
      
      This was actually observed in test crash_recovery_redundant_dtx, per
      the commit message and test comment. That test ignores the output of
      "select 1", but we probably still want the output to verify the fault
      was encountered.
      
      It is still mysterious why the PANIC message is sometimes missing. I
      spent some time digging but reckon I cannot root-cause it quickly. One
      guess is that the PANIC message was sent to the frontend in
      errfinish(), but the kernel-buffered data was dropped after abort()
      due to ereport(PANIC). Another guess is something wrong related to the
      libpq protocol (not saying it's a libpq bug). In any case it does not
      deserve much more time for the tests alone, so simply mask the PANIC
      message to make the test result deterministic without affecting the
      test's purpose.
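
      A hedged sketch (not from this commit) of the fault-injection pattern
      these tests rely on; it assumes the gp_inject_fault extension is
      installed and that dbid 1 is the master:
      ```
      -- arrange for the master to panic before reading the next command
      SELECT gp_inject_fault('before_read_command', 'panic', 1);
      -- the next statement trips the fault and resets the master
      SELECT 1;
      ```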
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
    • Move to a resource group with memory_limit 0 · 37a19376
      Committed by xiong-gang
      When moving a query to a resource group whose memory_limit is 0, the
      available memory is the currently available global shared memory.
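
      An illustrative sketch (not from the commit; the group name, limits,
      and session id are invented, and pg_resgroup_move_query availability
      depends on the GPDB version):
      ```
      CREATE RESOURCE GROUP rg_zero_mem WITH (cpu_rate_limit=10, memory_limit=0);
      -- move a running session's query; it then draws from global shared memory
      SELECT gp_toolkit.pg_resgroup_move_query(1234, 'rg_zero_mem');
      ```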
    • Fix a recursive AbortTransaction issue · b5c4fdc0
      Committed by xiong-gang
      When an error happens after ProcArrayEndTransaction, it recurses back
      into AbortTransaction; we need to make sure this generates no extra
      WAL record and fails no assertions.
  8. 13 Jun 2020, 2 commits
  9. 12 Jun 2020, 2 commits
    • Create external table fdw extension under gpcontrib. (#10187) · d86f32e5
      Committed by (Jerome)Junfeng Yang
      Remove pg_exttable.h since the catalog no longer exists. Move the
      function declarations from pg_exttable.h into external.h. Extract
      related code into external.c, which keeps all code that cannot be
      moved into an external table fdw extension.

      Also, move the external table orca interface into external.c as a
      workaround; we may provide an orca fdw routine in the future.

      Extract the external table's execution logic into the external table
      fdw extension.
      
      Create the gp_exttable_fdw extension during gpinitsystem to allow
      creating system external tables.
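
      For illustration (hedged; gpinitsystem normally does this on new
      clusters, so a manual step is only a sketch):
      ```
      CREATE EXTENSION IF NOT EXISTS gp_exttable_fdw;
      ```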
    • a4e230b2
  10. 11 Jun 2020, 5 commits
    • Revert "Fix flaky test exttab1" · f538f4b6
      Committed by Hubert Zhang
      This reverts commit 026e4595, which breaks a pxf test case; we need
      to handle that first.
    • Fix flaky test terminate_in_gang_creation · 63b5adf9
      Committed by Hubert Zhang
      The test case restarts all primaries and expects the old session to
      fail on its next query, since gangs are cached. But the restart may
      take more than 18s, the maximum idle time cached QEs may live. In that
      case the next query in the old session simply fetches a new gang
      without the expected errors. Set gp_vmem_idle_resource_timeout to 0 to
      fix this flaky test, as sketched below.
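
      A hedged sketch of the stabilizing setting (0 disables the idle-QE
      timeout, so cached gangs never expire mid-test):
      ```
      SET gp_vmem_idle_resource_timeout = 0;
      ```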
      Reviewed-by: Paul Guo <pguo@pivotal.io>
    • Fix flaky test exttab1 · 026e4595
      Committed by Hubert Zhang
      The flaky case happens when selecting from an external table with the
      option "fill missing fields". Attaching gdb to the QE shows this value
      is sometimes not false there. In ProcessCopyOptions we used
      intVal(defel->arg) to parse the boolean value, which is not correct;
      use defGetBoolean instead.
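
      For illustration only (a sketch; the table, host, and file are
      invented), an external table exercising the option in question:
      ```
      CREATE EXTERNAL TABLE ext_fill_demo (a int, b text, c date)
      LOCATION ('gpfdist://etlhost:8081/demo.txt')
      FORMAT 'TEXT' (DELIMITER '|' FILL MISSING FIELDS);
      ```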
    • Add a new line feed and fix a bad file name · f281ac17
      Committed by J·Y
    • docs - graph analytics new page (#10138) · 6d7b949c
      Committed by Lena Hunter
      * clarifying pg_upgrade note
      
      * graph edits
      
      * graph analytics updates
      
      * menu edits and code spacing
      
      * graph further edits
      
      * insert links for modules
  11. 10 Jun 2020, 1 commit