PostgreSQL

This page contains information about PostgreSQL that the GitLab Support team uses when troubleshooting. GitLab makes this information public so that anyone can make use of the Support team's collected knowledge.

WARNING: Some procedures documented here may break your GitLab instance. Use at your own risk.

If you're on a paid tier and aren't sure how to use these commands, contact Support for assistance with any issues you're having.

Other GitLab PostgreSQL documentation

This section is for links to information elsewhere in the GitLab documentation.

Procedures

Troubleshooting/Fixes

  • GitLab database requirements, including

    • Support for MySQL was removed in GitLab 12.1; migrate to PostgreSQL.
    • Required extension: pg_trgm
    • Required extension: btree_gist (a quick way to verify both extensions is sketched after this list)
  • Errors like the following in the production/sidekiq log; see Set default_transaction_isolation into read committed:

    ActiveRecord::StatementInvalid PG::TRSerializationFailure: ERROR:  could not serialize access due to concurrent update
  • PostgreSQL HA replication slot errors (slot usage can be checked as sketched after this list):

    pg_basebackup: could not create temporary replication slot "pg_basebackup_12345": ERROR:  all replication slots are in use
    HINT:  Free one or increase max_replication_slots.
  • Geo replication errors including:

    ERROR: replication slots can only be used if max_replication_slots > 0
    
    FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist
    
    Command exceeded allowed execution time
    
    PANIC: could not write to file 'pg_xlog/xlogtemp.123': No space left on device
  • Checking Geo configuration, including:

    • Reconfiguring hosts/ports.
    • Checking and fixing user/password mappings.
  • Common Geo errors.

Support topics

Database deadlocks

References: issue #1, which discusses the following error:

ERROR: deadlock detected

Three applicable timeouts are identified in issue #1; our recommended settings are as follows:

deadlock_timeout = 5s
statement_timeout = 15s
idle_in_transaction_session_timeout = 60s

Quoting from issue #1:

"If a deadlock is hit, and we resolve it through aborting the transaction after a short period, then the retry mechanisms we already have will make the deadlocked piece of work try again, and it's unlikely we'll deadlock multiple times in a row."

NOTE: In Support, our general approach to reconfiguring timeouts (applies also to the HTTP stack) is that it's acceptable to do it temporarily as a workaround. If it makes GitLab usable for the customer, then it buys time to understand the problem more completely, implement a hot fix, or make some other change that addresses the root cause. Generally, the timeouts should be put back to reasonable defaults after the root cause is resolved.

In this case, the guidance we had from development was to drop deadlock_timeout or statement_timeout, but to leave the third setting, idle_in_transaction_session_timeout, at 60s. That setting protects the database from sessions potentially hanging for days. There's more discussion in the issue relating to introducing this timeout on GitLab.com.

PostgreSQL defaults:

  • statement_timeout = 0 (never)
  • idle_in_transaction_session_timeout = 0 (never)

Comments in issue #1 indicate that these should both be set to at least a number of minutes for all Omnibus GitLab installations (so that statements and idle transactions don't hang indefinitely). However, 15s for statement_timeout is very short, and is only workable if the underlying infrastructure is very performant.

See current settings with:

sudo gitlab-rails runner "c = ApplicationRecord.connection ; puts c.execute('SHOW statement_timeout').to_a ;
puts c.execute('SHOW lock_timeout').to_a ;
puts c.execute('SHOW idle_in_transaction_session_timeout').to_a ;"

It may take a little while to respond.

{"statement_timeout"=>"1min"}
{"lock_timeout"=>"0"}
{"idle_in_transaction_session_timeout"=>"1min"}

NOTE: These are Omnibus GitLab settings. If an external database, such as a customer's own PostgreSQL installation or Amazon RDS, is being used, these values aren't set by GitLab and must be set externally.