1. 03 Jan 2020 (2 commits)
    • Fix error scenarios of alter_db_set_tablespace test · 7788f434
      Committed by Ashwin Agrawal
      The alter_db_set_tablespace test has scenarios that inject an error
      fault on content 0 and then run the ALTER DATABASE SET TABLESPACE
      command. Once the error is hit on content 0, the transaction is
      aborted. Depending on when the transaction gets aborted, it is
      unpredictable how far the command has progressed on the
      non-content-0 primaries. Only if a non-content-0 primary has reached
      the point of directory copy will its abort record carry a database
      directory deletion record to be replayed on the mirror; otherwise it
      will not. The test was waiting for the directory deletion fault to
      be triggered on all content mirrors. That expectation is incorrect
      and makes the test flaky depending on timing.
      
      Hence, modify the error scenarios to wait for directory deletion
      only on content 0, then wait for all the mirrors to replay all WAL
      records generated so far, and finally make sure the destination
      directory is empty. This should eliminate the flakiness from the
      test.
      Reviewed-by: Asim R P <apraveen@pivotal.io>
    • GpDirsExist check for directories only · fe8fd394
      Committed by Ashwin Agrawal
      gpdeletesystem uses GpDirsExist() to check whether dump directories
      are present, so it can warn and avoid deleting the cluster. Only if
      the "-f" option is used is it allowed to delete the cluster with
      dump directories present. However, this function incorrectly
      matched both files and directories named "*dump*", not just
      directories.
      
      As a result, gpdeletesystem started failing after commit eb036ac1:
      FTS writes a file named `gpsegconfig_dump`, which GpDirsExist()
      incorrectly reported as a backup directory being present. Fix this
      by checking only for directories, not files.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/8442
      Reviewed-by: Asim R P <apraveen@pivotal.io>
  2. 02 Jan 2020 (1 commit)
    • Fix error code overwrite in cdbdisp_dumpDispatchResult. (#9323) (#9340) · 7e8e25fd
      Committed by (Jerome)Junfeng Yang
      The error code should not be set twice with different codes in
      `errfinish_and_return`, since the evaluation order of a function's
      arguments is compiler-dependent. If the final error code ends up as
      ERRCODE_INTERNAL_ERROR, the file name and line number are printed:
      ```
      ERROR:  Error on receive from SEG IP:PORT pid=PID: *** (cdbdispatchresult.c:487)
      ```
      
      It's strange to print out the file name and line number here. So,
      remove ERRCODE_INTERNAL_ERROR and keep only
      ERRCODE_GP_INTERCONNECTION_ERROR, which was also the error code
      before commit 143bb7c6.
      Reviewed-by: Paul Guo <paulguo@gmail.com>
      (cherry picked from commit 55d6415b)
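      A minimal standalone C sketch of the hazard described above (the
      names here are illustrative, not from the GPDB sources): when two
      arguments of a single call each set the same error code, the
      compiler may evaluate them in either order, so which code survives
      is unpredictable.
      ```
      #include <stdio.h>

      static int
      set_code(int *code, int value)
      {
          *code = value;      /* mimics errcode() recording into the error data */
          return value;
      }

      int
      main(void)
      {
          int code = 0;

          /* C leaves the evaluation order of these two arguments
           * unspecified, so 'code' may end up 1 or 2 depending on the
           * compiler -- the same trap as passing two errcode() calls
           * to one errfinish_and_return()/ereport(). */
          printf("%d %d\n", set_code(&code, 1), set_code(&code, 2));
          printf("final code = %d\n", code);
          return 0;
      }
      ```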
  3. 01 Jan 2020 (1 commit)
    • Fix flakiness of alter_db_set_tablespace test · 8d1460fc
      Committed by Ashwin Agrawal
      The test should make sure the mirror has processed the
      drop-database WAL record before checking that the destination
      tablespace directory does not exist. It skipped that wait for
      content 0 in the case of a panic after writing the WAL record,
      which is incorrect.
      
      Add a wait for all the mirrors to process the WAL record, and only
      then perform the validation. This should fix the failures seen in
      CI with the diff below:
      
      ```
      --- /tmp/build/e18b2f02/gpdb_src/src/test/regress/expected/alter_db_set_tablespace.out    2019-10-14 16:09:43.638372174 +0000
      +++ /tmp/build/e18b2f02/gpdb_src/src/test/regress/results/alter_db_set_tablespace.out    2019-10-14 16:09:43.714379108 +0000
      @@ -1262,25 +1271,352 @@
       CONTEXT:  PL/Python function "stat_db_objects"
       NOTICE:  dboid dir for database alter_db does not exist on dbid = 4
       CONTEXT:  PL/Python function "stat_db_objects"
      -NOTICE:  dboid dir for database alter_db does not exist on dbid = 5
      -CONTEXT:  PL/Python function "stat_db_objects"
       NOTICE:  dboid dir for database alter_db does not exist on dbid = 6
       CONTEXT:  PL/Python function "stat_db_objects"
       NOTICE:  dboid dir for database alter_db does not exist on dbid = 7
       CONTEXT:  PL/Python function "stat_db_objects"
       NOTICE:  dboid dir for database alter_db does not exist on dbid = 8
       CONTEXT:  PL/Python function "stat_db_objects"
      - dbid | relfilenode_dboid_relative_path | size
      -------+---------------------------------+------
      -    1 |                                 |
      -    2 |                                 |
      -    3 |                                 |
      -    4 |                                 |
      -    5 |                                 |
      -    6 |                                 |
      -    7 |                                 |
      -    8 |                                 |
      -(8 rows)
      + dbid | relfilenode_dboid_relative_path |  size
      +------+---------------------------------+--------
      +    1 |                                 |
      +    2 |                                 |
      +    3 |                                 |
      +    4 |                                 |
      +    5 | 180273/112                      |  32768
      +    5 | 180273/113                      |  32768
      +    5 | 180273/12390                    |  65536
      +    5 | 180273/12390_fsm                |  98304
      
      <....chopping output as it is very long...>
      
      +    5 | 180273/PG_VERSION               |      4
      +    5 | 180273/pg_filenode.map          |   1024
      +    6 |                                 |
      +    7 |                                 |
      +    8 |                                 |
      +(337 rows)
      
      ```
      Reviewed-by: Asim R P <apraveen@pivotal.io>
  4. 31 Dec 2019 (1 commit)
  5. 30 Dec 2019 (1 commit)
    • Fix a 'copy..reject limit N' bug · 933a68c5
      Committed by xiong-gang
      When 'CopyReadLineText' finds a broken end-of-copy marker, it
      errors out without setting the current index in the buffer. When a
      'reject limit' is set, COPY then processes the same line again.
  6. 28 Dec 2019 (1 commit)
    • Change default value of wal_sender_timeout GUC · aaaa18d4
      Committed by Ashwin Agrawal
      Based on field reports for GPDB, the 1-minute default of the
      wal_sender_timeout GUC causes the primary to terminate the
      replication connection too often under heavy workloads. This gets
      the mirror marked down and piles up WAL on the primary. It is
      mostly seen in configurations where fsync takes a long time on the
      mirrors. Hence, a higher default value for this GUC helps avoid
      unnecessarily marking mirrors down. The only downside of this
      change is that when the connection between primary and mirror
      exists but the mirror stops responding for some reason, this is
      detected a little later than with the previous 1-minute timeout.
      But the 1-minute timeout has the bigger downside, since mirrors
      must be manually recovered after being marked down; hence it is
      desirable not to falsely break the connection due to the timeout.
      
      Increasing the timeout to 5 minutes is just an educated guess, as
      it is hard to come up with a reasonable default, but bumping the
      value is desirable based on the inputs received.
      Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
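      For reference, this is what the new default looks like when set
      explicitly; the gpconfig invocation is an illustrative sketch, not
      taken from this commit:
      ```
      # postgresql.conf
      wal_sender_timeout = 5min        # previous default: 1min

      # or cluster-wide via the management utility (illustrative):
      #   gpconfig -c wal_sender_timeout -v 5min
      ```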
  7. 27 Dec 2019 (2 commits)
  8. 26 Dec 2019 (3 commits)
    • zstd: check for OOM when creating {C,D}Ctx · 712c6f36
      Committed by Ning Yu
      ZSTD creates the CCtx and DCtx with malloc() by default, so a NULL
      pointer is returned on OOM; callers must check for it.
      
      Also fixed a typo in the comment.
      
      Fixes: https://github.com/greenplum-db/gpdb/issues/9294
      
      Reported-by: shellboy
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
      
      (cherry picked from commit d74aa39f)
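      A minimal standalone sketch of the check this commit describes,
      written against the public libzstd API (plain C, not the actual
      GPDB patch):
      ```
      #include <stdio.h>
      #include <zstd.h>

      int
      main(void)
      {
          /* ZSTD_createCCtx()/ZSTD_createDCtx() allocate with malloc()
           * by default and return NULL on OOM, so check the result. */
          ZSTD_CCtx *cctx = ZSTD_createCCtx();
          if (cctx == NULL)
          {
              fprintf(stderr, "out of memory creating zstd compression context\n");
              return 1;
          }

          ZSTD_DCtx *dctx = ZSTD_createDCtx();
          if (dctx == NULL)
          {
              ZSTD_freeCCtx(cctx);
              fprintf(stderr, "out of memory creating zstd decompression context\n");
              return 1;
          }

          ZSTD_freeDCtx(dctx);
          ZSTD_freeCCtx(cctx);
          return 0;
      }
      ```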
    • Fix flaky test segspace. · 328e3954
      Committed by Paul Guo
      We recently started to control WAL write bursts by calling
      SyncRepWaitForLSN() more frequently, and that change made the
      segspace test flaky.

      The segspace test injects a fault (exec_hashjoin_new_batch) with an
      interrupt event, which makes it easier for the cancel event to be
      seen inside SyncRepWaitForLSN() and thus sometimes produces
      additional output.

      Fix this by disabling the current cancel-handling code when it is
      not a commit call of SyncRepWaitForLSN().
      
      Here is the diff of the test failure.
      
       begin;
       insert into segspace_t1_created
       SELECT t1.* FROM segspace_test_hj_skew AS t1, segspace_test_hj_skew AS t2 WHERE t1.i1=t2.i2;
      +DETAIL:  The transaction has already changed locally, it has to be replicated to standby.
       ERROR:  canceling MPP operation
      +WARNING:  ignoring query cancel request for synchronous replication to ensure cluster consistency
       rollback;
      
      Cherry-picked from 84642c4b
      
      Besides that, add the commit parameter to SyncRepWaitForLSN(),
      following the master code. I checked the related upstream patch;
      adding the new parameter in the current GPDB version should be
      fine.
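      A toy standalone model of the fix (the real SyncRepWaitForLSN()
      lives in the backend's syncrep code; the flag and function names
      below are made up for illustration):
      ```
      #include <stdbool.h>
      #include <stdio.h>

      static bool query_cancel_pending = true;   /* pretend a cancel arrived */

      /* Only the commit call reacts to a pending cancel; non-commit
       * callers (such as the WAL-burst throttling calls) skip the
       * cancel handling, so tests no longer see the extra WARNING at
       * unpredictable points. */
      static void
      sync_rep_wait_for_lsn(long lsn, bool commit)
      {
          if (query_cancel_pending && commit)
              printf("WARNING:  ignoring query cancel request for synchronous "
                     "replication to ensure cluster consistency\n");
          printf("waited for lsn %ld (commit=%d)\n", lsn, (int) commit);
      }

      int
      main(void)
      {
          sync_rep_wait_for_lsn(100, false);  /* throttling call: silent */
          sync_rep_wait_for_lsn(200, true);   /* commit call: warns, then waits */
          return 0;
      }
      ```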
    • icw: fix flaky alter_db_set_tablespace · 4e757457
      Committed by Ning Yu
      The alter_db_set_tablespace test has been flaky for a long time;
      one typical failure looks like this:
      
          --- /regress/expected/alter_db_set_tablespace.out
          +++ /regress/results/alter_db_set_tablespace.out
          @@ -1204,21 +1213,348 @@
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 2
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 3
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 4
          -NOTICE:  dboid dir for database alter_db does not exist on dbid = 5
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 6
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 7
           NOTICE:  dboid dir for database alter_db does not exist on dbid = 8
      
      The test disables FTS probing with fault injection; however, it
      does not wait for the fault to be triggered. The other problem is
      that FTS probing was disabled after the PANIC, which might not be
      in time.
      
      So the problem was a scenario where the fault was injected after
      the FTS loop had already passed the fault point, and when the
      subsequent PANIC was raised, FTS was still active.
      
      By manually triggering the probe, and then waiting to ensure that
      the fault is hit at least once, we can guarantee that the scenario
      described above doesn't happen.
      Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
      Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      (cherry picked from commit 54e3af6d)
  9. 24 Dec 2019 (3 commits)
    • Fix `DELIMITER 'OFF'` validation error for external tables. · a3573af8
      Committed by (Jerome)Junfeng Yang
      For the external table below:
      ```
      CREATE EXTERNAL WEB TABLE web_ext ( junk text) execute 'echo hi' on master
          FORMAT 'text' (delimiter 'OFF' null E'\\N' escape E'\\');
      ```
      When querying the table, an unexpected error happens:
      ```
      SELECT * FROM web_ext;
      ERROR:  using no delimiter is only supported for external tables
      ```
      
      The external scan calls BeginCopyFrom to initialize CopyStateData.
      When `ProcessCopyOptions` runs in `BeginCopy`, the relation may be
      an external relation. The fix checks whether the relation is an
      external relation and, if so, sets the correct parameters for
      `ProcessCopyOptions`.
    • Make commit_blocking_on_standby test stable · e60cab6f
      Committed by Ashwin Agrawal
      Similar to commit 8c40565a, apply the same change to the
      commit_blocking_on_standby test as well. Checking sync_state is
      unnecessary and makes the test flaky; the only alternative would be
      to add retries, so simply not checking it is better.
    • Make dtm_recovery_on_standby test stable · d32116d9
      Committed by Ashwin Agrawal
      This test sometimes fails with the diff below:
      
      ```
      -- Sync state between master and standby must be restored at the end.
       select application_name, state, sync_state from pg_stat_replication;
        application_name | state     | sync_state
       ------------------+-----------+------------
      - gp_walreceiver   | streaming | sync
      + gp_walreceiver   | streaming | async
       (1 row)
      ```
      
      The reason is that the query may be executed in the window after
      the standby is created and changes state to streaming, but before
      the flush location has been set to a valid location based on the
      standby's reply. pg_stat_get_wal_senders() reports sync_state as
      "async" if the flush location is an invalid pointer. Hence, we
      sometimes get the above diff depending on timing.
      
      To fix this, remove the sync_state field from the query in this
      test. In GPDB we always create the standby as sync-only, and
      "state" already tells us what we want to check here: whether the
      standby is up and running. Checking sync_state is therefore
      unnecessary; avoiding it makes the test stable. Keeping sync_state
      would require adding unnecessary retry logic to this query.
  10. 23 Dec 2019 (3 commits)
    • Fix add partition relops null pointer issue. · 1a093429
      Committed by Zhenghua Lyu
      transformRelOptions may return a null pointer in some cases; add a
      check for that in `add_partition_rule`.
    • Remove Motion codepath to detoast HeapTuples, convert to MemTuple instead. · 54022630
      Committed by Heikki Linnakangas
      The Motion sender code has four different codepaths for serializing a
      tuple from the input slot:
      
      1. Fetch MemTuple from slot, copy it out as it is.
      
      2. Fetch MemTuple from slot, re-format it into a new MemTuple by fetching
         and inlining any toasted datums. Copy out the re-formatted MemTuple.
      
      3. Fetch HeapTuple from slot, copy it out as it is.
      
      4. Fetch HeapTuple from slot, copy out each attribute separately, fetching
         and inlining any toasted datums.
      
      In addition to the above, there are "direct" versions of codepaths 1 and 3,
      used when the tuple fits in the caller-provided output buffer.
      
      As discussed in https://github.com/greenplum-db/gpdb/issues/9253, the
      fourth codepath is very inefficient if the input tuple contains
      datums that are compressed inline but not toasted. We decompress
      such datums before serializing, and in the worst case might need to
      recompress them again in the receiver if the tuple is written out
      to a table. I tried to fix that in commit 4c7f6cf7, but it was
      broken and was reverted in commit 774613a8.
      
      This is a new attempt at fixing the issue. This commit removes
      codepath 4 altogether, so that if the input tuple is a HeapTuple
      with any toasted attributes, it is first converted to a MemTuple
      and codepath 2 is used to serialize it. That way, we have less code
      to test, and materializing a MemTuple is roughly as fast as the old
      code that wrote out the attributes of a HeapTuple one by one,
      except that the MemTuple codepath avoids the decompression of
      already-compressed datums.
      
      While we're at it, add some tests for the various codepaths through
      SerializeTuple().
      
      To test the performance of the affected case, where the input tuple is
      a HeapTuple with toasted datums, I used this:
      
      ---
      CREATE temporary TABLE foo (a text, b text, c text, d text, e text, f text,
        g text, h text, i text, j text, k text, l text, m text, n text, o text,
        p text, q text, r text, s text, t text, u text, v text, w text, x text,
        y text, z text, large text);
      ALTER TABLE foo ALTER COLUMN large SET STORAGE external;
      INSERT INTO foo
        SELECT 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
               'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
               repeat('1234567890', 1000)
        FROM generate_series(1, 10000);
      
      -- verify that the data is uncompressed, should be about 110 MB.
      SELECT pg_total_relation_size('foo');
      
      \o /dev/null
      \timing on
      SELECT * FROM foo; -- repeat a few times
      ---
      
      The last select took about 380 ms on my laptop, with or without this patch.
      So the new codepath where the input HeapTuple is converted to a MemTuple
      first, is about as fast as the old method. There might be small differences
      in the serialized size of the tuple, too, but I didn't explicitly measure
      that. If you have a toasted but not compressed datum, the input must be
      quite large, so small differences in the datum header sizes shouldn't
      matter much.
      
      If the input HeapTuple contains any compressed datums, this avoids the
      recompression, so even if converting to a MemTuple was somewhat slower in
      that case, it should still be much better than before. I kept the
      HeapTuple codepath for the case that there are no toasted datums. I'm not
      sure it's significantly faster than converting to a MemTuple either; the
      caller has to slot_deform_tuple() the received tuple before it can do
      much with it, and that is slower with HeapTuples than MemTuples. But that
      codepath is straightforward enough that getting rid of it wouldn't save
      much code, and I don't feel like doing the performance testing to justify
      it right now.
      Reviewed-by: Asim R P <apraveen@pivotal.io>
    • Remove unnecessary checks for NULL return from getChunkFromCache. · 8e78b8fb
      Committed by Heikki Linnakangas
      It cannot return NULL. It will either return a valid pointer, or the
      palloc() will ERROR out.
      Reviewed-by: Asim R P <apraveen@pivotal.io>
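      A tiny standalone illustration of why such checks are dead code (a
      toy allocator with palloc()'s error-instead-of-NULL contract; the
      names are made up):
      ```
      #include <stdio.h>
      #include <stdlib.h>

      /* Like palloc(): never returns NULL; on failure it reports an
       * error and bails out, so callers need no NULL checks. */
      static void *
      xalloc(size_t n)
      {
          void *p = malloc(n);
          if (p == NULL)
          {
              fprintf(stderr, "ERROR:  out of memory\n");
              exit(1);
          }
          return p;
      }

      int
      main(void)
      {
          char *chunk = xalloc(64);   /* no NULL check needed at the call site */
          free(chunk);
          return 0;
      }
      ```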
  11. 21 Dec 2019 (3 commits)
    • COPY: Allocate partition tuple slot in the per query context · 5b41310a
      Committed by ggbq
      With the master in COPY_DISPATCH mode, the TupleTableSlot for a
      new ResultRelInfo of a partition was incorrectly allocated in the
      per-tuple memory context. This can happen in the presence of
      ResultRelInfo::ri_partInsertMap. It causes a crash because the
      per-tuple context is reset for each tuple iteration; the slot
      should be allocated in the per-query context.
      
      Reproduce the crash using the following SQL commands:
      
          DROP TABLE IF EXISTS partition_test;
      
          CREATE TABLE partition_test
          (
            id INT,
            tm TIMESTAMP
          )
          DISTRIBUTED BY (id)
          PARTITION BY RANGE(tm)
          (
          PARTITION p2019 START ('2019-01-01'::TIMESTAMP) END ('2020-01-01'::TIMESTAMP),
          DEFAULT PARTITION extra
          );
      
          ALTER TABLE partition_test ADD COLUMN dd TIMESTAMP;
          ALTER TABLE partition_test DROP COLUMN dd;
          ALTER TABLE partition_test ADD COLUMN dd TEXT;
      
          ALTER TABLE partition_test SPLIT DEFAULT PARTITION START ('2020-01-01'::TIMESTAMP) END ('2021-01-01'::TIMESTAMP)
           INTO (PARTITION p2020, DEFAULT PARTITION);
      
          COPY (SELECT generate_series, '2020-12-20'::TIMESTAMP, 'ABCDEF' FROM generate_series(1, 10000)) TO '/tmp/partition_test.txt';
          COPY partition_test FROM '/tmp/partition_test.txt';
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
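      A standalone toy showing the bug pattern and the fix (a simplistic
      arena standing in for PostgreSQL memory contexts; not the actual
      executor code):
      ```
      #include <stdio.h>
      #include <stddef.h>
      #include <string.h>

      /* Toy arena standing in for a memory context: reset wipes it. */
      typedef struct { char buf[1024]; size_t used; } Arena;

      static void *
      arena_alloc(Arena *a, size_t n)
      {
          void *p = a->buf + a->used;
          a->used += n;
          return p;
      }

      static void
      arena_reset(Arena *a)
      {
          a->used = 0;
          memset(a->buf, 0, sizeof(a->buf));
      }

      int
      main(void)
      {
          Arena per_tuple = { {0}, 0 };
          Arena per_query = { {0}, 0 };

          /* Bug pattern: state that must survive the whole COPY is
           * carved out of the per-tuple arena, which is reset after
           * every tuple. */
          char *slot = arena_alloc(&per_tuple, 16);
          strcpy(slot, "partition slot");
          arena_reset(&per_tuple);                            /* per-tuple reset */
          printf("per-tuple slot after reset: '%s'\n", slot); /* contents gone */

          /* Fix: long-lived state comes from the per-query arena. */
          slot = arena_alloc(&per_query, 16);
          strcpy(slot, "partition slot");
          arena_reset(&per_tuple);
          printf("per-query slot after reset: '%s'\n", slot); /* still intact */
          return 0;
      }
      ```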
    • Fix version number in configure error message when ORCA is not found. · b5f8e7c3
      Committed by Heikki Linnakangas
      Commit 589c737e bumped the expected ORCA version number to 3.86, but
      forgot to update the error message.
    • 7b43d847
  12. 20 Dec 2019 (5 commits)
  13. 19 Dec 2019 (5 commits)
    • Replace ExecFetchSlotHeapTuple()+heap_copytuple() with ExecCopySlotHeapTuple() in CopyFrom() (#9261) · e911d894
      Committed by Paul Guo
      
      This saves one memory copy of the tuple's length for each tuple
      handled. I did not see a big improvement in 'copy from' perf
      testing with this change (a 1%+ average running-time reduction over
      20 runs with table length 1k+), but it is still helpful.
      
      Reviewed-by: Ashwin Agrawal
      
      Cherry-picked from 9a0e6ab6
    • In walreceiver, don't try to do ereport() in a signal handler. · 5fe98cc1
      Committed by Ashwin Agrawal
      This is a cherry-pick of upstream commit
      a1a789eb, slightly modified to work with the current GPDB code,
      which is based on 9.4. It helps fix the deadlocks seen in CI for
      the walreceiver.
      
      -------------------
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Mon Apr 29 12:26:07 2019 -0400
      
          In walreceiver, don't try to do ereport() in a signal handler.
      
          This is quite unsafe, even for the case of ereport(FATAL) where we won't
          return control to the interrupted code, and despite this code's use of
          a flag to restrict the areas where we'd try to do it.  It's possible
          for example that we interrupt malloc or free while that's holding a lock
          that's meant to protect against cross-thread interference.  Then, any
          attempt to do malloc or free within ereport() will result in a deadlock,
          preventing the walreceiver process from exiting in response to SIGTERM.
          We hypothesize that this explains some hard-to-reproduce failures seen
          in the buildfarm.
      
          Hence, get rid of the immediate-exit code in WalRcvShutdownHandler,
          as well as the logic associated with WalRcvImmediateInterruptOK.
          Instead, we need to take care that potentially-blocking operations
          in the walreceiver's data transmission logic (libpqwalreceiver.c)
          will respond reasonably promptly to the process's latch becoming
          set and then call ProcessWalRcvInterrupts.  Much of the needed code
          for that was already present in libpqwalreceiver.c.  I refactored
          things a bit so that all the uses of PQgetResult use latch-aware
          waiting, but didn't need to do much more.
      
          These changes should be enough to ensure that libpqwalreceiver.c
          will respond promptly to SIGTERM whenever it's waiting to receive
          data.  In principle, it could block for a long time while waiting
          to send data too, and this patch does nothing to guard against that.
          I think that that hazard is mostly theoretical though: such blocking
          should occur only if we fill the kernel's data transmission buffers,
          and we don't generally send enough data to make that happen without
          waiting for input.  If we find out that the hazard isn't just
          theoretical, we could fix it by using PQsetnonblocking, but that
          would require more ticklish changes than I care to make now.
      
          This is a bug fix, but it seems like too big a change to push into
          the back branches without much more testing than there's time for
          right now.  Perhaps we'll back-patch once we have more confidence
          in the change.
      
          Patch by me; thanks to Thomas Munro for review.
      
          Discussion: https://postgr.es/m/20190416070119.GK2673@paquier.xyz
      -------------------
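      A compilable sketch of the latch-aware waiting idea, using only
      public libpq calls (the flag and function names are illustrative;
      the real logic lives in libpqwalreceiver.c and calls
      ProcessWalRcvInterrupts after each wakeup):
      ```
      #include <signal.h>
      #include <sys/select.h>
      #include <libpq-fe.h>

      /* Instead of blocking indefinitely inside PQgetResult(), poll the
       * connection's socket with a timeout and check a shutdown flag
       * between waits, so SIGTERM is noticed promptly and no ereport()
       * ever runs inside a signal handler. */
      static PGresult *
      get_result_interruptibly(PGconn *conn,
                               volatile sig_atomic_t *shutdown_requested)
      {
          while (PQisBusy(conn))
          {
              fd_set         readfds;
              struct timeval tv = {1, 0};   /* wake at least once per second */
              int            sock = PQsocket(conn);

              FD_ZERO(&readfds);
              FD_SET(sock, &readfds);
              (void) select(sock + 1, &readfds, NULL, NULL, &tv);

              if (*shutdown_requested)
                  return NULL;              /* caller exits cleanly */
              if (!PQconsumeInput(conn))
                  return NULL;              /* connection trouble */
          }
          return PQgetResult(conn);
      }
      ```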
    • efe96f02
    • Docs - removing unused passwordcheck docs source · 32dbb993
      Committed by David Yozie
    • Silence compiler warning in pg_upgrade unit tests. · ec3282ea
      Committed by Adam Berlin
  14. 18 Dec 2019 (3 commits)
    • Revert "When serializing a tuple for Motion, don't decompress compressed datums." · e12740fe
      Committed by Asim R P
      This reverts commit c34e76e4.
      
      Thank you Ekta for finding this simple repro that demonstrates the
      problem with this patch and Jesse for initial analysis:
      
         CREATE TABLE foo(a text, b text);
         INSERT INTO foo SELECT repeat('123456789', 100000)::text as a,
                                repeat('123456789', 10)::text as b;
         SELECT * FROM foo;
      
      The motion receiver has no idea whether a datum it received is
      compressed or not, because the varlena header is stripped off before
      sending the data.  Heikki and I discussed two options to fix this:
      
      1. Include the varlena header when sending.  This incurs at most an
      8-byte overhead per variable-length datum in a heap tuple.

      2. Always send tuples as MemTuples.  This is more desirable because
      it simplifies the code, but it also comes with a performance cost.
      
      Let's evaluate the two options based on performance and then commit the
      best one.
    • Rewrite test case xlog files (#9248) · 7d06c16f
      Committed by Hao Wu
      * Revert "Increase default value of wal_keep_segments GUC."
      This reverts commit dd18c4a0. We have replication slots now, so
      wal_keep_segments is not needed to keep WAL files from being
      removed unsafely.
      
      * Rewrite test missing_xlog
      The old missing_xlog tries to test the error that occurs on the
      primary when the WAL files requested by the mirror are unavailable.
      To simulate this scenario, the test moves WAL files to another
      folder, triggers the error, and moves them back. But this may
      damage the WAL files on the primary, because it assumes there is no
      WAL write between the two moves. Unfortunately, the assumption is
      false: hint bits can trigger a WAL write request (FPI_FOR_HINT),
      and the hot standby may also request a WAL write. So moving WAL
      files is not recommended.
      
      To make `sync_error` occur, we set `wal_keep_segments` to 0,
      i.e. disable the feature. And to temporarily break the
      replication-slot constraint, so that WAL files still needed by the
      mirror can be deleted, we use fault injection. In the end the
      mirror is missing some WAL files needed for incremental recovery,
      so we do a full recovery of the mirror.
      
      * Add 'wal_keep_segments = 5' to the configuration of the pg_rewind test
      Since wal_keep_segments is 0 by default, WAL files are guarded only
      by replication slots. The pg_rewind test currently has no
      replication slot, so as a workaround, set wal_keep_segments back to
      5 in the test case.
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      (cherry picked from commit 06686422)
    • docs - updates to configuring windows client for kerberos auth docs (#8545) · 8ba98ebc
      Committed by Lisa Owen
      * docs - updates to configuring windows client for kerberos auth docs
      
      * need to set PGGSSLIB before running greenplum_clients_path.bat
      
      * add the AD info back into the file, but comment out
      
      * misc edit
      
      * identify config step for custom kerb config file location
      
      * include automated task example in generating keytab section
      
      * note that an additional step requires admin privileges
      
      * reinstate, but comment out, an AD statement
      
      * remove dangling reference to AD user name
  15. 17 Dec 2019 (6 commits)