- 29 Mar 2018, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Without this fix, FTS went into an infinite loop on probe if a primary failed to respond to the probe request. Encountered the issue using a suspend fault for the FTS message handler.
-
Committed by Ashwin Agrawal
If the primary has a promote file and pg_basebackup copies it over, then due to the existence of that file the mirror gets auto-promoted, which is very dangerous. Hence avoid copying over the promote file.
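The idea can be sketched as follows (a hypothetical Python analogue; the real exclusion lives in pg_basebackup's C code, and only the file name "promote" is taken from the commit message): skip the trigger file while copying the data directory.

```python
import os
import shutil
import tempfile

# Files that must never reach the mirror; "promote" is the trigger file
# whose mere presence would auto-promote the mirror.
EXCLUDED_FILES = {"promote"}

def copy_data_dir(src, dst):
    """Recursively copy a data directory, skipping excluded files."""
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        if name in EXCLUDED_FILES:
            continue  # never copy the promote trigger file
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isdir(s):
            copy_data_dir(s, d)
        else:
            shutil.copyfile(s, d)

# Demo: a primary data dir containing a promote file
primary = tempfile.mkdtemp()
open(os.path.join(primary, "promote"), "w").close()
open(os.path.join(primary, "pg_control"), "w").close()
mirror = os.path.join(tempfile.mkdtemp(), "base")
copy_data_dir(primary, mirror)
print(sorted(os.listdir(mirror)))  # the promote file is absent
```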
-
Committed by Ashwin Agrawal
Seeing no reason for blocking visibility of this GUC. Like all the other FTS GUCs, it's useful.
-
Committed by Ashwin Agrawal
If the infinite_loop fault is set and shutdown is requested, bail out instead of waiting and needing a forced kill. It was this way until commit ae760e25 removed the `IsFtsShudownRequested()` check. The suspend fault was intentionally left unchanged, as it never had a shutdown check and we don't wish to change the behavior of any tests using it.
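As a sketch of the restored behavior (hypothetical Python; the real check is the C `IsFtsShudownRequested()` call mentioned above), the fault loop polls a shutdown predicate instead of spinning unconditionally:

```python
# Hypothetical analogue of the infinite_loop fault with the shutdown
# check restored: spin only while shutdown has not been requested.
def infinite_loop_fault(is_shutdown_requested):
    """Spin until shutdown is requested; return how many times it spun."""
    spins = 0
    while not is_shutdown_requested():
        spins += 1
    return spins

# Demo: shutdown is requested on the 6th check, so the loop spins 5 times.
checks = {"n": 0}
def shutdown_after_five():
    checks["n"] += 1
    return checks["n"] > 5

print(infinite_loop_fault(shutdown_after_five))  # 5
```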
-
- 28 Mar 2018, 6 commits
-
-
Committed by Asim R P
The DTX_STATE_FORCED_COMMITTED state was identical to DTX_STATE_INSERTED_COMMITTED.
-
Committed by Asim R P
Remove a log message that indicated when a QE reader is writing an XLOG record. Back in GPDB 4.3, before the lazy XID feature existed, a QE reader would be assigned a valid transaction ID. That could lead to extending CLOG and generating XLOG. This case no longer applies to GPDB.
-
Committed by Asim R P
The command "COPY enumtest FROM stdin;" hit an infinite loop on the merge branch. The code indicates that the issue can happen on master as well. The QD backend went into an infinite loop when the connection had already been closed from the QE end: the TCP connection was in CLOSE_WAIT state, the libpq connection status was CONNECTION_BAD, and asyncStatus was PGASYNC_BUSY. Fix the infinite loop by checking the libpq connection status in each iteration.
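The shape of the fix can be sketched like this (hypothetical Python model; the real loop is in GPDB's C dispatcher and uses libpq's connection status): the wait loop re-checks connection status on every iteration, so a dead connection raises an error instead of spinning forever.

```python
# Model of a result-wait loop. Connection.status plays the role of libpq's
# CONNECTION_OK / CONNECTION_BAD; names here are illustrative only.
CONNECTION_OK, CONNECTION_BAD = "ok", "bad"

class Connection:
    def __init__(self, responses):
        self._responses = list(responses)  # None entries mean "no data yet"
        self.status = CONNECTION_OK

    def poll(self):
        if not self._responses:
            self.status = CONNECTION_BAD   # peer closed; no more data will come
            return None
        return self._responses.pop(0)

def wait_for_result(conn, max_iterations=1000):
    for _ in range(max_iterations):
        if conn.status == CONNECTION_BAD:
            # The fix: bail out instead of looping forever on a dead connection.
            raise ConnectionError("connection to segment lost")
        result = conn.poll()
        if result is not None:
            return result
    raise TimeoutError("gave up waiting")

conn = Connection([None, None, "result"])
print(wait_for_result(conn))  # "result", after two empty polls
```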
-
Committed by Karen Huddleston
Authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
This commit introduces a GUC `optimizer_enable_associativity` to enable or disable join associativity. Join associativity increases the search space, as it increases the number of groups needed to represent a join and its associative counterpart, i.e. (A X B) X C ~ A X (B X C). This patch disables the join associativity transform by default; users can enable it if required. A few plan changes were observed due to this change. However, further evaluation of the plan changes revealed that even though the cost of the resulting plan increased, the execution time went down by 1-2 seconds. For the queries with plan changes, three tables are joined: A, B and C. If we increase the number of tuples returned by the subquery which forms A', we see the old plan. But if the number of tuples in relations B and C is significantly higher, the plans changed by the patch yield faster execution times. This suggests that we may need to tune the cost model to adapt to such cases. The plan cost increase is 1000x compared to the old plans; this 1000x factor is due to the value of `optimizer_nestloop_factor=1024`. If you set `optimizer_nestloop_factor=1`, the plan before and after the patch remains the same.
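The identity the GUC controls can be illustrated with a toy equi-join (a Python sketch with made-up data, not GPDB code): (A X B) X C produces the same rows as A X (B X C), so the transform only adds plan alternatives, never changes results.

```python
# Naive inner equi-join of two lists of dicts on a shared column,
# used to show the associativity identity on tiny illustrative tables.
def join(left, right, on):
    return [{**l, **r} for l in left for r in right if l[on] == r[on]]

A = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
B = [{"id": 1, "b": "p"}, {"id": 2, "b": "q"}]
C = [{"id": 1, "c": "m"}]

left_assoc = join(join(A, B, "id"), C, "id")   # (A X B) X C
right_assoc = join(A, join(B, C, "id"), "id")  # A X (B X C)
assert left_assoc == right_assoc  # both yield only the id=1 row
print(left_assoc)
```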
-
Committed by Ashwin Agrawal
Thank you, Heikki, for pointing out the presence of the `gpxlogloc` data type to compare xlog locations instead of the existing hacks in the test.
-
- 27 Mar 2018, 5 commits
-
-
Committed by Peifeng Qiu
When gpload finishes its query, it sends SIGTERM to gpfdist. gpfdist handles SIGTERM with exit(1), which invokes the registered apr handlers and cleans up all apr resources, including the apr_pool. If this happens during the normal destruction of the apr_pool in do_close, gpfdist will hang. Call _exit in gpfdist to avoid any cleanup handlers, and let gpload send SIGKILL to perform a hard kill.
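The exit()-vs-_exit() distinction the fix relies on can be demonstrated directly (a Python sketch; Python's `sys.exit`/`os._exit` mirror C's `exit(1)`/`_exit(1)` here): `_exit` terminates immediately without running registered cleanup handlers, so it cannot re-enter cleanup already in progress.

```python
import subprocess
import sys

# Child script: registers a cleanup handler (analogous to gpfdist's apr
# cleanup), then exits either softly (handlers run) or hard (handlers skipped).
CHILD = r"""
import atexit, os, sys
atexit.register(lambda: print("cleanup ran"))
if sys.argv[1] == "exit":
    sys.exit(1)      # like C exit(1): atexit handlers run
else:
    os._exit(1)      # like C _exit(1): hard exit, handlers skipped
"""

def run_child(mode):
    proc = subprocess.run([sys.executable, "-c", CHILD, mode],
                          capture_output=True, text=True)
    return proc.stdout

print(repr(run_child("exit")))   # 'cleanup ran\n'
print(repr(run_child("_exit")))  # ''
```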
-
Committed by Joao Pereira
- Smaller image size - Change the role from gp to gpadmin - Remove libraries that are not needed
-
Committed by Ashwin Agrawal
The test failed a few times randomly in CI, and a newly added debug log revealed a flaw in the text comparison of xlog locations. For example, text comparison considers the xlog location "1/103BE80" smaller than "1/FEE230", and hence the test fails. So the logic was replaced with another hack: convert the location to hex and then compare, which should serve the purpose until we get to 9.4.
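The flaw and the hack are easy to reproduce (Python sketch; the real test does this in SQL, and the `gpxlogloc` type, or `pg_lsn` in 9.4+, does the comparison natively):

```python
# An xlog location "X/Y" is two hexadecimal halves. Comparing the string
# lexically is wrong; parsing each half as hex gives the correct ordering.
def parse_xlog_location(loc):
    hi, lo = loc.split("/")
    return (int(hi, 16), int(lo, 16))

# Text comparison gets the order backwards for these two locations:
assert "1/103BE80" < "1/FEE230"                                    # wrong
assert parse_xlog_location("1/103BE80") > parse_xlog_location("1/FEE230")
print("numeric comparison is correct")
```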
-
Committed by Chris Hajas
pg_dump currently returns the following error, because parlevel was not used in the query for backend version 9.0: "pg_dump: column number -1 is out of range 0..20". Also, fix the query that was running on GPDB4. Authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Mel Kiyama
docs: pl/container - note about the OOM message displayed when PID 1 is terminated on older Docker installs (#4758)
-
- 24 Mar 2018, 2 commits
-
-
Committed by Goutam Tadi
- Use nocheck to skip `make check` in test - Use parallel to use 6 parallel processes
-
Committed by Shreedhar Hardikar
Although standard SQL ignores the ORDER BYs in views (and sub-selects), PostgreSQL, and thus GPDB, preserves them. For the query in create_view.sql, the expected output will be sorted according to the ORDER BY clause in the view definition. But if the rows come in the wrong order from the view, gpdiff.pl will not report it as an error. Remove the FIXME in this file, since there is no way to enforce the order via gpdiff.pl. Instead, to test the output row ordering, this commit modifies the queries in gp_create_view.sql and adds an order-sensitive operator, row_number(). Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
- 23 Mar 2018, 6 commits
-
-
Committed by Lisa Owen
* docs - add cancel/term backend content * address comments from daniel * use single l in cancel tenses
-
Committed by Sambitesh Dash
Signed-off-by: Sambitesh Dash <sdash@pivotal.io> Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Lisa Owen
-
Committed by Ashwin Agrawal
FTS starts probing as soon as the FTS process gets created, and triggers every minute. There is no reason to trigger a probe and wait for one probe cycle on the first connection, which calls initTM(). If for some reason a command fails to dispatch to a segment, the dispatcher will trigger a probe anyway, so why slow things down up front.
-
Committed by Ashwin Agrawal
I am not sure about the usage of the GUC gp_set_read_only. When set, it puts FTS into read-only mode, which is then consulted when starting any transaction, converting it to read-only. It seems better to achieve the same result using the GUC `default_transaction_read_only` or `transaction_only_read_only`, which can be used at the session level as well, instead of a full system-wide setting.
-
Committed by Ashwin Agrawal
-
- 22 Mar 2018, 15 commits
-
-
Committed by Richard Guo
-
Committed by Pengzhou Tang
Previously, to avoid the leak of the gang if someone terminates the query in the middle of gang creation, we added a global pointer named CurrentGangCreating so the partially created gang can also be destroyed at the end of the transaction. However, the memory context named GangContext where CurrentGangCreating was created may be reset before CurrentGangCreating is actually destroyed and a SIGSEGV may occur. So this commit makes sure that CurrentGangCreating is destroyed ahead of other created gangs and the reset of GangContext.
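The ordering bug can be modeled in a few lines (a hypothetical Python sketch; GPDB's memory contexts are C, and only the names GangContext and CurrentGangCreating come from the commit message): anything allocated in a context must be torn down before the context itself is reset, or teardown touches freed memory.

```python
# Toy stand-ins for a memory context and an object allocated within it.
class MemoryContext:
    def __init__(self):
        self.alive = True
    def reset(self):
        self.alive = False

class Gang:
    def __init__(self, context):
        self.context = context
    def destroy(self):
        if not self.context.alive:
            # The SIGSEGV analogue: touching memory after the context reset.
            raise RuntimeError("use-after-reset: context already gone")

GangContext = MemoryContext()
CurrentGangCreating = Gang(GangContext)

# Correct teardown order, as the fix enforces: destroy the partially
# created gang first, then reset the context it lived in.
CurrentGangCreating.destroy()
GangContext.reset()
print("teardown ok")
```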
-
Committed by Ashwin Agrawal
-
Committed by Kris Macoskey
-
Committed by Kris Macoskey
Allows the compile and ICW tests for each platform to pass and fail independently of other platforms. The gate jobs were born out of necessity, to handle infrastructure issues with Concourse. Now that the infrastructure issues have been stabilized, it's time to review the layout of the pipeline again. This commit removes the icw_start_gate job that multiplexed a passing condition from all of the compile jobs sitting in front of every ICW job. This was not desirable following a longer-running issue with compilation on one platform, ubuntu16, which then blocked ICW tests on the remaining platforms. Replacing the passing condition of icw_start_gate on each ICW job is the corresponding compilation job for the test, based on platform. E.g., this:

    (blocks)                                (blocks)
    compile_gpdb_centos6  -> gate_icw_start -> icw_planner_centos6
    compile_gpdb_ubuntu16 ->                -> icw_planner_ubuntu16

is now this:

    (blocks)
    compile_gpdb_centos6  -> icw_planner_centos6
    (blocks)
    compile_gpdb_ubuntu16 -> icw_planner_ubuntu16

Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Ashwin Agrawal
Concurrent index builds are not supported in Greenplum. There seems to exist a GUC gp_create_index_concurrently under which we still allow the creation, but even with that, CONCURRENTLY cannot be supported for bitmap indexes. This test kept failing randomly in CI due to "WARNING: ignoring query cancel request for synchronous replication to ensure cluster consistency." This happens because during bitmap index creation each segment commits locally, and then one of the segments errors during the index build with "ERROR: CONCURRENTLY is not supported when creating bitmap indexes" and issues a cancellation message to the other segments.
-
Committed by Ashwin Agrawal
Failures are seen randomly in CI where checkpoint_and_wait_for_replication_replay returns false, but when the box is checked later, the primaries and mirrors are in sync. So these seem to be timing-related failures; maybe the containers sometimes run slow, causing delays. This test checks functionality, not performance, so avoiding flaky failures with an increased number of retries is better, and in general shouldn't affect runtime.
-
Committed by Dhanashree Kashid
For queries of the form "NOT (subselect)", the planner lost the "NOT" operator during the initial pull-up in pull_up_sublinks_qual_recurse(), which resulted in an incorrect filter and hence wrong results. Inside pull_up_sublinks_qual_recurse(), when a qual contains NOT, we check if the sublink type is any one of EXISTS, ANY or ALL, and invoke the appropriate sublink pull-up routines. In the case of a qual of the form "NOT (SELECT <>)" the sublink type is EXPR; hence we recurse into the argument of NOT, at which point we lose the information about the NOT operator. This commit fixes the issue by returning the node unmodified when the argument of NOT is an EXPR sublink. The EXPR sublink later gets pulled up by preprocess_qual_conditions(), wherein pull_up_sublinks() is invoked again to handle sublinks in an expression. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Melanie Plageman
The pseudocols field was added to RangeTblEntry to avoid duplicates in the target list. When the field was added, a WRITE_NODE_FIELD was added in _outRangeTblEntry(), but a corresponding READ_NODE_FIELD was not added in _readRangeTblEntry(). Additionally, this field was not added to the readfast and outfast functions either, since this field was not used after the planning stage and did not have to be dispatched to the QEs. Later, in fd6741f9, a corresponding READ_NODE_FIELD was added along with a FIXME to decide whether or not to serialize/deserialize. After confirming that this field is not used after planning, we have decided to remove any serialization/deserialization of the pseudocols member of RangeTblEntry. The pseudocols member of RangeTblEntry is a list of CdbRelColumnInfo. This structure was added in the same original commit. An _outCdbRelColumnInfo() was added for this structure as well; similarly, no corresponding _readCdbRelColumnInfo() was added. This data structure is only used for making pseudocols, so we have also removed the _outCdbRelColumnInfo() function. Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Taylor Vesely
The GPDB-specific API for pgstat_report_waiting() accepts a waiting reason, unlike the upstream counterpart, which accepts only a boolean flag. Renaming the API to gpstat_report_waiting() allows us to catch new uses of the API introduced by upstream merges. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Committed by Taylor Vesely
Without this change, the ps display of postmaster child processes may get mangled, e.g.: postgres: 15432, gpadmin isolation2test [local] con14 cmd52 con14 cm?~??????X??? This change uses the GPDB-specific function get_real_act_ps_display() to get the ps display string before it is modified. This change also sets the Gp_role of the FTS daemon process to utility instead of the default value of dispatch. That prevents appending "conXXX" to the FTS daemon's ps display. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Committed by Todd Sedano
Add missing sudo commands [ci skip]
-
Committed by Todd Sedano
[ci skip]
-
Committed by Ashwin Agrawal
To help debug the issue, PANIC in case we encounter this shouldn't-happen case.
-