1. 31 October 2017, 10 commits
  2. 30 October 2017, 14 commits
    • Consolidate XML transform examples with gpfdist docs in the admin guide (#3601) · 64ce04e7
      Committed by Chuck Litzell
      * Consolidate XML transform examples with gpfdist docs in the admin guide
      
      * Minor edit
      
      * Review comments and relocate to load section
      64ce04e7
    • Optimize header files in fe-connect.c for Windows Clients · abdf7600
      Committed by Adam Lee
      abdf7600
    • Use string representation of segment ids in CatMissingIssue object. · 792a9b43
      Committed by Heikki Linnakangas
      In commit 226e8867, I changed the CatMissingIssue object to hold the
      content IDs of segments where an entry is missing in a Python list, instead
      of the string representation of a PostgreSQL array (e.g. "{1,2,-1}") that
      was used before. That was a nice simplification, but it turns out that
      there was more code that accessed the CatMissingIssue.segids field that I
      missed. It would make sense to change the rest of the code, IMHO, but to
      make the CI pipeline happy quickly, this commit just changes the code back
      to using a string representation of a PostgreSQL array again.
      
      This hopefully fixes the MM_gpcheckcat behave test failures.
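      The string form the code reverts to can be sketched in Python (the helper name here is hypothetical, not the actual gpcheckcat code):

```python
def segids_to_pg_array(content_ids):
    """Render a list of segment content IDs as a PostgreSQL array
    literal, e.g. [1, 2, -1] -> "{1,2,-1}" (illustrative sketch)."""
    return "{" + ",".join(str(cid) for cid in content_ids) + "}"
```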
      792a9b43
    • Fix Clients for Windows pipeline · 7dcf6c25
      Committed by Adam Lee
      Use the WIN32 macro to bypass some code, such as poll.
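      The same bypass pattern can be illustrated in Python, where poll() is likewise unavailable on Windows (an analogy to the C macro check, not the actual change):

```python
import select

def wait_readable(fds, timeout_ms):
    """Wait for readable fds, falling back to select() where
    poll() does not exist (e.g. on Windows)."""
    if hasattr(select, "poll"):  # absent on Windows, like the WIN32 case
        p = select.poll()
        for fd in fds:
            p.register(fd, select.POLLIN)
        return [fd for fd, _event in p.poll(timeout_ms)]
    ready, _, _ = select.select(fds, [], [], timeout_ms / 1000.0)
    return ready
```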
      7dcf6c25
    • Fix gpload error count bug (#3629) · c9977c1d
      Committed by Jialun
      The gpload error count was incorrect when more than one segment had format errors: cmdtime differs between segments, so only the errors with the newest cmdtime were counted.
      
      So we add a startTime, which is used to count all the errors that occurred during the same gpload operation.
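      The counting change can be sketched as follows (the data shape is a simplification; the real gpload reads error rows from the database):

```python
def count_format_errors(error_rows, start_time):
    """Count every error row logged since this gpload run began,
    rather than only the rows sharing the newest cmdtime.
    error_rows is a list of (cmdtime, error_count) pairs (sketch)."""
    return sum(count for cmdtime, count in error_rows
               if cmdtime >= start_time)
```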
      c9977c1d
    • Remove unnecessary header files in fe-misc.c · 00dbc905
      Committed by Adam Lee
      00dbc905
    • Fix SUSE and Windows pipeline caused by retiring gp_libpq_fe · 719a1cca
      Committed by Adam Lee
      SUSE needs header files for off_t and Windows has no poll.
      
      (cherry picked from commit 222d9c6dc63421c6aa2006ee02f4a18848cfc2f8)
      719a1cca
    • Fix a resgroup performance issue. · 0b85b9d0
      Committed by Ning Yu
      On low-end systems with 1-2 CPU cores, new queries in a cold resgroup can suffer from high latency when the overall load is very high.
      
      The root cause is that we used to set a very high cpu priority for gpdb cgroups, so non-gpdb processes are scheduled with very low priority and high latency. GPDB processes are also affected by this, because the postmaster and other auxiliary processes are not put into gpdb cgroups. Even QD and QE processes are not put into a gpdb cgroup until their transaction has begun.
      
      To fix this, we made the following changes:
      * put postmaster and all its children processes into the toplevel
        gpdb cgroup;
      * provide a GUC to control the cgroup cpu priority for gpdb processes
        when resgroup is enabled;
      * set a lower cpu priority by default;
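      A rough sketch of how such a priority setting could map onto the cgroup cpu.shares value (the formula and names below are assumptions for illustration, not the actual implementation):

```python
def cpu_shares_for_gpdb(ncores, priority_pct, shares_per_core=1024):
    """Hypothetical mapping: scale the kernel's default per-core
    share (1024) by the configured gpdb cpu priority percentage."""
    return int(ncores * shares_per_core * priority_pct / 100)
```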
      0b85b9d0
    • Fix dblink panic when connecting as a non-superuser · 9b758447
      Committed by Adam Lee
      1. The QD-to-QD connection user is taken from the PGUSER environment variable; we need to set it to the session user in dblink.
      
      2. A QD-to-QD Unix domain socket connection doesn't require any authentication, so request that non-superusers provide a host in order to use TCP/UDP connections.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
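      The two rules can be sketched as a conninfo check (a hypothetical helper; the real fix lives in dblink's C code):

```python
def dblink_conninfo(conninfo, session_user, is_superuser):
    """Force the session user onto the connection string, and reject
    host-less (Unix socket) connections for non-superusers, since
    those would skip authentication (illustrative sketch)."""
    if "host=" not in conninfo and not is_superuser:
        raise ValueError("non-superuser dblink requires host= (TCP)")
    return conninfo + " user=" + session_user
```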
      9b758447
    • Update mock makefiles to work with libpq objfiles · 0ed93335
      Committed by Adam Lee
      Before this, mock.mk had trouble filtering mocked objects out of src/backend/objfiles.txt, because the filenames in it carry a redundant "src/backend/../../" prefix and a "_for_backend" suffix.
      
      This commit removes them before mocking to make it work.
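      The cleanup can be illustrated in Python (illustrative only; the actual change is in mock.mk's make rules):

```python
def normalize_objfile(path):
    """Strip the redundant path prefix and the _for_backend suffix
    so mocked objects can be matched against objfiles.txt entries."""
    path = path.replace("src/backend/../../", "")
    if path.endswith("_for_backend.o"):
        path = path[:-len("_for_backend.o")] + ".o"
    return path
```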
      0ed93335
    • Retire gp_libpq_fe part 2, changing include paths · 974c414e
      Committed by Adam Lee
      Signed-off-by: Adam Lee <ali@pivotal.io>
      974c414e
    • Retire gp_libpq_fe part 1, libpq itself · 510a20b6
      Committed by Adam Lee
          commit b0328d5631088cca5f80acc8dd85b859f062ebb0
          Author: mcdevc <a@b>
          Date:   Fri Mar 6 16:28:45 2009 -0800
      
              Separate our internal libpq front end from the client libpq library
              upgrade libpq to the latest to pick up bug fixes and support for more
              client authentication types (GSSAPI, KRB5, etc)
              Upgrade all files dependent on libpq to handle new version.
      
      Above is the initial commit of gp_libpq_fe; there seems to be no good reason to still have it.
      
      Key things this PR does:
      
      1. Remove the gp_libpq_fe directory.
      2. Build the libpq source code into two versions, for frontend and backend, distinguished by the FRONTEND macro.
      3. libpq for the backend still bypasses local authentication, SSL and some environment variables; these are the only differences.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      510a20b6
    • resgroup: enable resource group test in ICW. · 15fdc144
      Committed by Pengzhou Tang
      15fdc144
    • Fix gpcheckcat error reporting of missing entries. · 8199a402
      Committed by Heikki Linnakangas
      In commit 226e8867, I changed the shape of the result set passed to the
      processMissingDuplicateEntryResult() function, removing the "exists" column.
      But I failed to update the line that extracts the primary key columns from
      the result set for that change. Fix.
      
      This should fix the failures in the gpcheckcat behave tests.
      8199a402
  3. 29 October 2017, 3 commits
    • Fix and move test case for MPP-22599. · 34c4d9f7
      Committed by Heikki Linnakangas
      The test had become useless over the years. The bug was that
      if ORCA fell back to the planner, then the check that you cannot update
      a distribution key column with the planner would not be made, and you
      could end up with incorrectly distributed rows. The test used a multi-level
      partitioned table as the target, because when the test was originally
      written, multi-level partitioning was not supported by ORCA. But at some
      point, support for that was added, so the test no longer tested the
      original bug it was written for.
      
      Rewrite the test using a different feature that ORCA falls back on, add
      comments to make it more clear what this is supposed to test so that it
      won't be broken so easily again. And finally, move the test out of TINC,
      into the main regression suite, which is what I was doing when I realized
      that it was broken altogether.
      34c4d9f7
    • Port numbers used by GPDB should be below kernel's ephemeral port range · 4b439bc9
      Committed by Taylor Vesely
      The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter, which is set to 32768-60999. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down.
      
      We change the example configuration files to lower the port numbers to the proper range.
      Signed-off-by: Asim R P <apraveen@pivotal.io>
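      A quick sanity check of a configured port against the kernel's ephemeral range might look like this (a sketch; the two-number string mirrors the sysctl's format):

```python
def outside_ephemeral_range(port, local_port_range="32768\t60999"):
    """Return True if port does not collide with the kernel's
    ephemeral range (/proc/sys/net/ipv4/ip_local_port_range)."""
    low, high = map(int, local_port_range.split())
    return port < low or port > high
```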
      4b439bc9
    • Fix tests for changes in snapshot behavior of serializable transactions. · 3d7fde3f
      Committed by Heikki Linnakangas
      Since commit 4a95afc1, a serializable transaction no longer establishes
      the snapshot at the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE command.
      Now it establishes a snapshot at the first "real" query that requires a
      snapshot. The new behavior matches PostgreSQL, and is a good thing. So
      silence the test failures, by adding dummy queries to establish snapshots
      at the same spots as before.
      
      I can't make all of these tests pass on my laptop, even before that commit,
      so I'm not sure if this fixes them all correctly. But I think so, and a few
      of these I could even verify locally.
      3d7fde3f
  4. 28 October 2017, 13 commits
    • When dispatching, send ActiveSnapshot along, not some random snapshot. · 4a95afc1
      Committed by Heikki Linnakangas
      If the caller specifies DF_WITH_SNAPSHOT, so that the command is dispatched
      to the segments with a snapshot, but it currently has no active snapshot in
      the QD itself, that seems like a mistake.
      
      In qdSerializeDtxContextInfo(), the comment talked about which snapshot to
      use when the transaction has already been aborted. I didn't quite
      understand that. I don't think the function is used to dispatch the "ABORT"
      statement itself, and we shouldn't be dispatching anything else in an
      already-aborted transaction.
      
      This makes it more clear which snapshot is dispatched along with the
      command. In theory, the latest or serializable snapshot can be different
      from the one being used when the command is dispatched, although I'm not
      sure if there are any such cases in practice.
      
      In the upcoming 8.4 merge, there are more changes coming up to snapshot
      management, which make it more difficult to get hold of the latest acquired
      snapshot in the transaction, so changing this now will ease the pain of
      merging that.
      
      I don't know why, but after making the change in qdSerializeDtxContextInfo,
      I started to get a lot of "Too many distributed transactions for snapshot
      (maxCount %d, count %d)" errors. Looking at the code, I don't understand
      how it ever worked. I don't see any guarantee that the array in
      TempQDDtxContextInfo or TempDtxContextInfo was pre-allocated correctly.
      Or maybe it got allocated big enough to hold max_prepared_xacts, which
      was always large enough, but it seemed rather haphazard to me. So in
      the spirit of "if you don't understand it, rewrite it until you do", I
      changed the way the allocation of the inProgressXidArray array works.
      In statically allocated snapshots, i.e. SerializableSnapshot and
      LatestSnapshot, the array is malloc'd. In a snapshot copied with
      CopySnapshot(), it is points to a part of the palloc'd space for the
      snapshot. Nothing new so far, but I changed CopySnapshot() to set
      "maxCount" to -1 to indicate that it's not malloc'd. Then I modified
      DistributedSnapshot_Copy and DistributedSnapshot_Deserialize to not give up
      if the target array is not large enough, but enlarge it as needed. Finally,
      I made a little optimization in GetSnapshotData() when running in a QE, to
      move the copying of the distributed snapshot data to outside the section
      guarded by ProcArrayLock. ProcArrayLock can be heavily contended, so that's
      a nice little optimization anyway, but especially now that
      DistributedSnapshot_Copy() might need to realloc the array.
      4a95afc1
    • Don't use a temp table in gpcheckcat when checking for missing entries. · 226e8867
      Committed by Heikki Linnakangas
      The new query is simpler. There was a comment about using the temp table
      to avoid gathering all the data to the master, but I don't think that is a
      good tradeoff. Creating a temp table is pretty expensive, and even with
      the temp table, the master needs to broadcast all of its entries
      to the segments. For comparison, with the Gather node, all the segments
      need to send their entries to the master. Isn't that roughly the same
      amount of traffic?
      
      A long time ago, the query was made to use the temp table, after a report
      from a huge cluster with over 1000 segments, where the total size of
      pg_attribute, across all the nodes, was over 200 GB. So the catalogs can
      be large. But even then, I don't think this query can get much better than
      this.
      
      The new query moves some of the logic from SQL to the Python code. Seems
      simpler that way.
      
      The real reason to do this right now is that in the next commit, I'm
      going to change the way snapshots are dispatched with a query, and that
      change will change the visibility of the temp table that was created in
      the same command. In a nutshell, currently, if you do "CREATE TABLE mytemp
      AS SELECT oid FROM pg_class WHERE relname='mytemp'", the oid of the table
      being created is included. On PostgreSQL, and after the snapshot changes
      I'm working on, it will not be, and that would confuse this gpcheckcat query.
      226e8867
    • Remove leftover expected output files. · 909d66a2
      Committed by Heikki Linnakangas
      Commit ce6aafb0 removed these tests.
      909d66a2
    • Remove stale FIXME from test answer file · fa86e469
      Committed by Dhanashree Kashid
      The original issue for which the FIXME was added is fixed in ORCA
      v2.46.2 and commit 8978e73c updated the test answer files with correct
      plan and results.
      
      Hence the FIXME is no longer valid.
      fa86e469
    • Remove python installation directory when destroying cluster · 748f32a7
      Committed by Karen Huddleston
      In the backup_43_restore_5 test, which uses different python versions,
      we were not properly removing the previous python packages. When running
      behave on the restore side, packages for python 2.7 were not installed
      since the directory was already present.
      Signed-off-by: Chris Hajas <chajas@pivotal.io>
      748f32a7
    • Remove gp_ from gp_statement_mem references (#3660) · 517fccd1
      Committed by Lisa Owen
      517fccd1
    • Fix assert caused by UPDATE query with `with oids` target table · 9f3d4903
      Committed by Dhanashree Kashid
      Fix a failure caused by an improper has_oids value when executing nodeSplitUpdate with optimizer=on. has_oids differed between the nodes across a Motion, so the receiving motion could not retrieve the correct value from the sending motion.
      Signed-off-by: Yu Yang <macroyuyang@pivotal.io>
      9f3d4903
    • Sub-partition exchange with wrong schema throws · e384f3be
      Committed by Melanie Plageman
      Partition exchange for a partitioned table with sub-partitions should throw an error as soon as the schema incompatibility between the partitioned table and the candidate table to exchange is found.
      Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
      e384f3be
    • Add nested CTE tests · a4efed31
      Committed by Melanie Plageman
      - Non-recursive CTE nested in non-recursive CTE
      - Non-recursive CTE nested in recursive CTE
      - Recursive CTE nested in non-recursive CTE
      - Recursive CTE nested in recursive CTE
      a4efed31
    • Move CTE related tests out of TINC · 7bea6dec
      Committed by Dhanashree Kashid
      Move CTE tests out of TINC and add the ones that are not already present to the main test suite. optimizer_functional_part1 runs 58 CTE tests, which can be categorized as:
      
      - cte_queries_<1-24>
      Almost all these tests (with one exception as described next) are
      already in the main test suite in `qp_with_clause.sql` which means we
      had duplicate tests. Hence this commit removes them from TINC.
      
      Two queries in `cte_queries24.sql`, query #6 and query #11, were missing in `qp_with_clause.sql`. This commit adds them.
      
      - cte_functest_<58-60>
      In the old commit 629f7e76, these tests were not moved since they produced different results on different invocations and were marked as skipped tests. They indeed behave differently with the Planner and ORCA: ORCA does not support updating rows when there are multiple matches in the join condition and errors out, while the Planner allows this but produces non-deterministic results. There is already coverage for this error in qp_dml_joins.sql and DML_over_joins.sql, hence this commit drops these tests and does not move them to the main test suite.
      
      - enable_cte_plan_space
      This is a pure ORCA test which tests the number of plan alternatives
      produced by ORCA when optimizer_cte_inlining is ON and OFF. It will be
      moved into the ORCA test suite.
      
      - icg_cte_with_values
      The test is already present in `with.sql`.
      
      - cte_functest_<22-23>_inlining_<enabled/disabled>
      Moved these tests to `qp_with_functional.sql`. qp_with_functional is
      tested with both inlining and noinlining.
      
      This commit thus removes the `cte` test folder from TINC.
      7bea6dec
    • Remove aoco_compression TINC tests · 527a11f2
      Committed by Jimmy Yih
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
      527a11f2
    • Remove aoco_compression targets from TINC Makefile · c2b4fdbd
      Committed by Xin Zhang
      Signed-off-by: Jimmy Yih <jyih@pivotal.io>
      c2b4fdbd
    • Remove aoco_compression TINC tests from pipeline · b6065f50
      Committed by Jimmy Yih
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
      b6065f50