- 01 November 2017 (8 commits)

Committed by David Yozie
* Docs: editing resource groups warning for RHEL 6
* Removing most RG experimental warnings; addressing the remaining SuSE issue and the experimental status on that platform

Committed by Haisheng Yuan
Previously, if the input query contained a catalog table, minirepro would not dump statistics for that catalog table, and we might generate a plan different from the customer's environment because of the missing statistics. With this patch, the statistics for catalog tables used in the input query are also dumped.

Committed by Fenggang

Committed by Lav Jain

Committed by Shreedhar Hardikar

Committed by Jacob Champion
LOG_MSG is called a lot during gpinitsystem (>400 times on my machine), and the (echo | awk | tr | grep) dance gets expensive, especially when we do it four times per call. We can do all this stuff with a single pass in native bash, which saves ten to fifteen seconds of runtime for me when creating a demo-cluster. Along the same lines, call `date` only once for a timestamp.

Committed by Jacob Champion
Instead of sleeping for ten seconds, wait for the work_queue to finish with a timeout of ten seconds (this way we return immediately once the queue has finished). This speeds up gpstop by about fifteen seconds on my machine. This solution uses a workaround for Queue.join()'s lack of a timeout; see https://bugs.python.org/issue9634
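The workaround in question, sketched minimally (the function name is hypothetical; GPDB's management scripts ran on Python 2, so this uses the `Queue` module):

```python
import time
from Queue import Queue  # Python 2; the module is named "queue" in Python 3

def join_with_timeout(work_queue, timeout):
    """Like Queue.join(), but give up after `timeout` seconds.

    Queue.join() takes no timeout (https://bugs.python.org/issue9634),
    so wait on the queue's all_tasks_done condition in a loop instead.
    """
    deadline = time.time() + timeout
    work_queue.all_tasks_done.acquire()
    try:
        while work_queue.unfinished_tasks:
            remaining = deadline - time.time()
            if remaining <= 0:
                break  # timed out; carry on anyway, as the old sleep(10) did
            work_queue.all_tasks_done.wait(remaining)
    finally:
        work_queue.all_tasks_done.release()

# usage: join_with_timeout(Queue(), 10) returns immediately for an empty queue
```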

Committed by Shreedhar Hardikar
explain.pl annotated the edges for nestloop only. This commit expands that to include any operator with multiple children; this way, if the left/right children are swapped, it is clearly visible. The script also considers InitPlans regular children, so they must be handled specially for labeling and otherwise be ignored. Also fixed the incorrect assumption that all InitPlans belong to a different slice; that is not always the case.

- 31 October 2017 (1 commit)

Committed by Ning Yu
CID 178328: Integer handling issues (OVERFLOW_BEFORE_WIDEN). Fixed by using the correct integer type in the expression. Also updated some out-of-date comments.

- 01 November 2017 (1 commit)

Committed by Jasper Li
The gpload_test directory at gpMgmt/bin/gpload_test/ is part of the "make install" target. This is not fatal for publishing a Debian (.deb) package of open-source Greenplum, but it makes the .deb larger and is embarrassing.

- 31 October 2017 (13 commits)

Committed by Heikki Linnakangas
These don't seem terribly interesting, so we probably could remove these. But they run fast, and I'm not sure if we have coverage for the exact same cases in the main test suite already. So keep them for now, but move them out of TINC.

Committed by Heikki Linnakangas
I don't see the value of these tests. We have tests on triggers in the 'triggers' pg_regress test already. Cases that just fall back from ORCA to the Postgres planner don't seem very interesting, and in particular, I don't see the point of testing specifically that they do fall back.

Committed by Heikki Linnakangas
Most of the code and test data in src/test/tinc/tincrepo/query/indexapply was unused. AFAICS, only the two "mpp21852" tests were run. Move those tests to the pg_regress regression suite.

Committed by Zhenghua Lyu
Non-superusers should not be able to execute pg_resgroup_get_status_kv.

Committed by Dhanashree Kashid
Tests "qp_query_execution" and "qp_correlated_query" are run in parallel in ICG. Excerpt from greenplum_schedule file: ``` test: qp_functions qp_misc_rio_join_small qp_misc_rio qp_correlated_query qp_targeted_dispatch qp_gist_indexes2 qp_gist_indexes3 qp_gist_indexes4 qp_query_execution ``` If the timing is wrong, they conflict with each other and causes plan differences in qp_correlated_query. Both these tests create a relation named "B" in their own namespaces; however qp_query_execution later updates the reltuples in pg_class for "B". This update command only uses relname to locate entry for "B" in pg_class and updates its tuple count to a large value. This update results in updating the reltuples for both relations "B" (in namespace qp_query_execution and qp_correlated_query). This causes intermittent EXPLAIN test failures in qp_correlated_query making it flaky. This commit fixes the problem by using relnamespace as well while updating the pg_class to uniquely identify "B". Signed-off-by: NSambitesh Dash <sdash@pivotal.io>

Committed by Zhenghua Lyu
The default value is actually a recommendation. To be safe for clusters and to keep segment memory usage close to the old, experience-based value, we set this GUC's default to 0.7.

Committed by Dhanashree Kashid
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>

Committed by Chuck Litzell
* Consolidate XML transform examples with gpfdist docs in the admin guide
* Minor edit
* Review comments and relocate to load section
* Fix links to relocated file

Committed by Chris Hajas
These tags were never used. This will be backported to 5X_STABLE.

Committed by Heikki Linnakangas
Despite the name, "dml_boundary_intarray", this test doesn't go anywhere near any limits. And we have tests for UPDATEs on arrays in the 'arrays' test.

Committed by Lisa Owen

Committed by Todd Sedano
Prior to this change, make create-demo-cluster would fail because pip could not be found on the command line.

Committed by Marbin Tan
gpcheckcat is already on terraform. This change was missed when moving gpcheckcat from pulse to terraform.

- 30 October 2017 (14 commits)

Committed by Chuck Litzell
* Consolidate XML transform examples with gpfdist docs in the admin guide
* Minor edit
* Review comments and relocate to load section

Committed by Adam Lee

Committed by Heikki Linnakangas
In commit 226e8867, I changed the CatMissingIssue object to hold the content IDs of segments where an entry is missing in a Python list, instead of the string representation of a PostgreSQL array (e.g. "{1,2,-1}") that was used before. That was a nice simplification, but it turns out that there was more code that accessed the CatMissingIssue.segids field that I missed. It would make sense to change the rest of the code, IMHO, but to make the CI pipeline happy quickly, this commit just changes the code back to using a string representation of a PostgreSQL array again. This hopefully fixes the MM_gpcheckcat behave test failures.
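For reference, the two representations differ as below (a tiny illustrative snippet, not code from the commit):

```python
segids = [1, 2, -1]  # Python list of content IDs, per commit 226e8867

# String form of a PostgreSQL array, which the surrounding code expects
# and which this commit reverts to:
segids_str = "{" + ",".join(str(i) for i in segids) + "}"
assert segids_str == "{1,2,-1}"
```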

Committed by Adam Lee
Use the WIN32 macro to bypass some code, such as poll.

Committed by Jialun
The gpload error count is incorrect when more than one segment has format errors, because the cmdtime differs across segments and only the errors with the newest cmdtime are counted. So we add startTime, which is used to count all the errors that occurred during the same gpload operation (sketched below).
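A sketch of the idea, assuming psycopg2 and the gp_read_error_log() error-log interface (gpload's real implementation differs; the connection details and table name are illustrative): counting by a recorded start time, instead of by the single newest cmdtime, picks up errors from every segment.

```python
import datetime
import psycopg2  # assumption: gpload actually uses its own connection layer

conn = psycopg2.connect(dbname="postgres")
cur = conn.cursor()

start_time = datetime.datetime.now()  # record once, before the load begins
# ... run the load into my_table ...

# Count every format error logged since start_time, across all segments,
# instead of only the rows that share the single newest cmdtime.
cur.execute(
    "SELECT count(*) FROM gp_read_error_log(%s) WHERE cmdtime >= %s",
    ("my_table", start_time))
error_count = cur.fetchone()[0]
```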

Committed by Adam Lee

Committed by Adam Lee
SUSE needs header files for off_t and Windows has no poll. (cherry picked from commit 222d9c6dc63421c6aa2006ee02f4a18848cfc2f8)

Committed by Ning Yu
On low-end systems with 1 or 2 CPU cores, new queries in a cold resource group can suffer from high latency when the overall load is very high. The root cause is that we used to set a very high CPU priority for the gpdb cgroups, so non-gpdb processes are scheduled with very low priority and high latency. GPDB processes are also affected by this, because the postmaster and other auxiliary processes are not put into gpdb cgroups; even QDs and QEs are not put into a gpdb cgroup until their transaction has begun. To fix this we made the changes below (sketched after the list):
* put the postmaster and all its child processes into the top-level gpdb cgroup;
* provide a GUC to control the cgroup CPU priority for gpdb processes when resource groups are enabled;
* set a lower CPU priority by default.
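A minimal sketch of the mechanism, assuming cgroup v1 mounted at the usual path (the directory name and shares value are illustrative; the actual GUC wiring lives in the server's C code):

```python
import os

GPDB_CGROUP = "/sys/fs/cgroup/cpu/gpdb"  # assumed top-level gpdb cgroup

def set_gpdb_cpu_priority(postmaster_pid, cpu_shares):
    """Set the gpdb cgroup's CPU weight and move the postmaster into it.

    Processes the postmaster forks afterwards start in the same cgroup,
    so QDs, QEs, and auxiliary processes are all covered from the start.
    """
    with open(os.path.join(GPDB_CGROUP, "cpu.shares"), "w") as f:
        f.write(str(cpu_shares))
    with open(os.path.join(GPDB_CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(postmaster_pid))

# e.g. set_gpdb_cpu_priority(pid, 1024)  # 1024 is the cgroup v1 default weight
```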

Committed by Adam Lee
1. The QD-to-QD connection user is the environment variable PGUSER; we need to set it to the session user in dblink.
2. A QD-to-QD Unix domain socket connection doesn't require any authentication, so request that non-superusers provide a host and use TCP/UDP connections.

Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>

Committed by Adam Lee
Before this, mock.mk had trouble filtering mocked objects out of src/backend/objfiles.txt, because the filenames in it contain a redundant "src/backend/../../" and the suffix "_for_backend". This commit removes them before mocking to make the filter work (illustrated below).
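The normalization being described, expressed as a Python illustration (mock.mk itself does this with make text functions; the example filename is made up):

```python
import os

def normalize(objfile):
    # e.g. "src/backend/../../src/backend/utils/misc/guc_for_backend.o"
    path = os.path.normpath(objfile)         # collapse the "../../" detour
    return path.replace("_for_backend", "")  # drop the mock-build suffix

assert (normalize("src/backend/../../src/backend/utils/misc/guc_for_backend.o")
        == "src/backend/utils/misc/guc.o")
```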

Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>

Committed by Adam Lee
commit b0328d5631088cca5f80acc8dd85b859f062ebb0
Author: mcdevc <a@b>
Date: Fri Mar 6 16:28:45 2009 -0800

    Separate our internal libpq front end from the client libpq library.
    Upgrade libpq to the latest to pick up bug fixes and support for more
    client authentication types (GSSAPI, KRB5, etc).
    Upgrade all files dependent on libpq to handle the new version.

Above is the initial commit of gp_libpq_fe; there seems to be no good reason to still have it. The key things this PR does:
1. Remove the gp_libpq_fe directory.
2. Build the libpq source code into two versions, for frontend and backend, distinguished by the FRONTEND macro.
3. libpq for the backend still bypasses local authentication, SSL, and some environment variables; these are the only differences.

Signed-off-by: Adam Lee <ali@pivotal.io>

Committed by Pengzhou Tang

Committed by Heikki Linnakangas
In commit 226e8867, I changed the shape of the result set passed to the processMissingDuplicateEntryResult() function, removing the "exists" column. But I failed to update the line that extracts the primary key columns from the result set for that change. Fix. This should fix the failures in the gpcheckcat behave tests.

- 29 October 2017 (3 commits)

Committed by Heikki Linnakangas
The test had become useless somewhere along the years. The bug was that if ORCA fell back to the planner, then the check that you cannot update a distribution key column with the planner would not be made, and you could end up with incorrectly distributed rows.

The test used a multi-level partitioned table as the target, because when the test was originally written, multi-level partitioning was not supported by ORCA. But at some point, support for that was added, so the test no longer tested the original bug it was written for.

Rewrite the test using a different feature that ORCA falls back on, and add comments to make it more clear what this is supposed to test so that it won't be broken so easily again. And finally, move the test out of TINC, into the main regression suite, which is what I was doing when I realized that it was broken altogether.

Committed by Taylor Vesely
The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter; it is set to 32768 through 60999 by default. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down. We change the example configuration files to move the port numbers out of this range (see the check below).

Signed-off-by: Asim R P <apraveen@pivotal.io>
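A quick way to check whether a configured port collides with the range (an illustrative helper, not part of the commit):

```python
def in_ephemeral_range(port):
    """True if `port` falls inside the kernel's ephemeral port range."""
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    return low <= port <= high

# With the 32768-60999 default, a segment configured on port 40000 collides:
print(in_ephemeral_range(40000))  # True
```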

Committed by Heikki Linnakangas
Since commit 4a95afc1, a serializable transaction no longer establishes the snapshot at the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE command. Now it establishes a snapshot at the first "real" query that requires a snapshot. The new behavior matches PostgreSQL, and is a good thing. So silence the test failures, by adding dummy queries to establish snapshots at the same spots as before. I can't make all of these tests pass on my laptop, even before that commit, so I'm not sure if this fixes them all correctly. But I think so, and a few of these I could even verify locally.