1. 05 Jan, 2018 · 1 commit
    • Backport 'Rebase' feature from EE to CE · 27a75ea1
      Authored by Jan Provaznik
      When a project uses the fast-forward merge strategy, the user has
      to rebase an MR onto the target branch before it can be merged.
      Now the user can rebase in the UI by clicking the 'Rebase' button
      instead of rebasing locally.
      
      This feature was already present in EE; this is only a backport
      of the feature to CE, with a couple of changes:
      * removed the rebase license check
      * renamed the migration (changed its timestamp)
      
      Closes #40301
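      For reference, the local workflow this button replaces is roughly the
      following (a sketch shelling out to git from Ruby; 'master' as the
      target branch is an assumption):

          # Roughly what a user had to do locally before this feature:
          system('git', 'fetch', 'origin')                                # update remote refs
          system('git', 'rebase', 'origin/master')                        # rebase onto the target branch
          system('git', 'push', '--force-with-lease', 'origin', 'HEAD')   # update the MR branch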
  2. 03 Jan, 2018 · 2 commits
  3. 02 Jan, 2018 · 1 commit
  4. 31 Dec, 2017 · 1 commit
    • User#projects_limit: remove DB default and add NOT NULL constraint · 75cf5f5b
      Authored by Mario de la Ossa
      This change is required because otherwise, if a user is created with a
      value for `projects_limit` that matches the DB default, it gets
      overwritten by `current_application_settings.default_projects_limit`. By
      removing the default we can once again allow a user to be created with a
      limit of 10 projects without the risk that it'll change to 10000.
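      A minimal sketch of what such a migration could look like (the class
      name, Rails version, and the old default of 10 are assumptions; the
      actual GitLab migration may differ):

          # Hypothetical Rails migration: drop the column default and forbid
          # NULLs, so an explicit value of 10 is never mistaken for "unset".
          class RemoveProjectsLimitDefaultFromUsers < ActiveRecord::Migration[4.2]
            def up
              change_column_default :users, :projects_limit, nil
              # Assumes any existing NULLs were backfilled beforehand.
              change_column_null :users, :projects_limit, false
            end

            def down
              change_column_null :users, :projects_limit, true
              change_column_default :users, :projects_limit, 10
            end
          end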
  5. 23 Dec, 2017 · 1 commit
  6. 22 Dec, 2017 · 1 commit
  7. 21 Dec, 2017 · 1 commit
  8. 13 Dec, 2017 · 1 commit
  9. 11 Dec, 2017 · 1 commit
  10. 09 Dec, 2017 · 1 commit
  11. 08 Dec, 2017 · 1 commit
    • Move the circuitbreaker check out into a separate process · f1ae1e39
      Authored by Bob Van Landuyt
      Moving the check out of the general requests makes sure we don't
      slow down regular requests.
      
      To keep the process performing these checks small, the check is still
      performed inside a Unicorn worker, but that worker is called from a
      separate process running on the same server.
      
      Because the checks are now done outside of normal requests, we can use a
      simpler failure strategy:
      
      The check is now performed in the background every
      `circuitbreaker_check_interval`. Failures are logged in Redis, and the
      failures are reset when the check succeeds. Per check we will try
      `circuitbreaker_access_retries` times within
      `circuitbreaker_storage_timeout` seconds.
      
      When the number of failures exceeds
      `circuitbreaker_failure_count_threshold`, we will block access to the
      storage.
      
      After `failure_reset_time` of no checks, we will clear the stored
      failures. This could happen when the process that performs the checks
      is not running.
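      A rough sketch of the loop described above (the setting names come from
      the commit message; the Redis key, the helpers, the storage path, and
      the concrete values are illustrative assumptions):

          require 'redis'

          CHECK_INTERVAL          = 1    # circuitbreaker_check_interval (seconds)
          FAILURE_COUNT_THRESHOLD = 3    # circuitbreaker_failure_count_threshold
          FAILURE_RESET_TIME      = 60   # failure_reset_time (seconds)

          # Assumed probe: raises when the storage is not accessible. The real
          # check retries `circuitbreaker_access_retries` times within
          # `circuitbreaker_storage_timeout` seconds.
          def storage_check!
            raise 'storage inaccessible' unless File.directory?('/var/opt/repositories')
          end

          # Assumed hook: what "blocking access to the storage" amounts to.
          def block_storage_access!
            warn 'circuitbreaker tripped: blocking access to the storage'
          end

          redis = Redis.new
          loop do
            begin
              storage_check!
              redis.del('storage:failures')   # failures are reset when the check succeeds
            rescue
              failures = redis.incr('storage:failures')
              # Stored failures expire after FAILURE_RESET_TIME with no checks,
              # e.g. when this checker process is not running.
              redis.expire('storage:failures', FAILURE_RESET_TIME)
              block_storage_access! if failures > FAILURE_COUNT_THRESHOLD
            end
            sleep CHECK_INTERVAL
          end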
  12. 07 Dec, 2017 · 2 commits
  13. 05 Dec, 2017 · 4 commits
  14. 03 Dec, 2017 · 6 commits
  15. 02 Dec, 2017 · 4 commits
  16. 29 Nov, 2017 · 4 commits
    • Reschedule the migration to populate fork networks · e03d4a20
      Authored by Bob Van Landuyt
      Rescheduling makes sure that fork networks whose source project was
      deleted are also created.
    • Add timeouts for Gitaly calls · 64e5f996
      Authored by Andrew Newdigate
    • Improve indexes on merge_request_diffs · 484ae2ee
      Authored by Sean McGivern
      To get the SHAs from an MR when finding pipelines, we fetch the last 100
      MR diffs for the MR and find the commits from those. This was unindexed
      before, because the index was not a composite index on
      (merge_request_diff_id, id). Changing that means this scope can use
      indexes exclusively.
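      A hedged sketch of the index change this implies (the columns are the
      ones named above, but the exact table and class name are assumptions;
      `add_concurrent_index` is GitLab's migration helper for building indexes
      without locking the table):

          class AddCompositeIndexForMrDiffCommits < ActiveRecord::Migration[4.2]
            include Gitlab::Database::MigrationHelpers

            disable_ddl_transaction!  # required for concurrent index creation

            def up
              # Composite index on (merge_request_diff_id, id), per the commit message.
              add_concurrent_index :merge_request_diff_commits, [:merge_request_diff_id, :id]
            end

            def down
              remove_concurrent_index :merge_request_diff_commits, [:merge_request_diff_id, :id]
            end
          end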
    • Remove serialised diff and commit columns · 4ebbfe5d
      Authored by Sean McGivern
      The st_commits and st_diffs columns on merge_request_diffs historically held the
      YAML-serialised data for a merge request diff, in a variety of formats.
      
      Since 9.5, these have been migrated in the background to two new tables:
      merge_request_diff_commits and merge_request_diff_files. That has the advantage
      that we can actually query the data (for instance, to find out how many commits
      we've stored), and that it can't be in a variety of formats, but must match the
      new schema.
      
      This is the final step of that journey, where we drop those columns and remove
      all references to them. This is a breaking change to the importer, because we
      can no longer import diffs created in the old format, and we cannot guarantee
      the export will be in the new format unless it was generated after this commit.
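      A minimal sketch of the column removal (the class name and Rails version
      are assumptions; the real migration may also need to handle rollback):

          class RemoveStColumnsFromMergeRequestDiffs < ActiveRecord::Migration[4.2]
            def change
              # Passing the column type keeps the migration reversible.
              remove_column :merge_request_diffs, :st_commits, :text
              remove_column :merge_request_diffs, :st_diffs, :text
            end
          end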
  17. 24 Nov, 2017 · 3 commits
  18. 23 Nov, 2017 · 1 commit
  19. 22 Nov, 2017 · 2 commits
    • Add environment_scope to cluster table · 98bb78a4
      Authored by Shinya Maeda
    • Update composite pipelines index to include "id" · aafe5c12
      Authored by Yorick Peterse
      This updates the composite index on ci_pipelines (project_id, ref,
      status) to also include the "id" column at the end. Adding this column
      to the index drastically improves the performance of queries used for
      getting the latest pipeline for a particular branch. For example, on
      project dashboards we'll run a query like the following:
      
          SELECT ci_pipelines.*
          FROM ci_pipelines
          WHERE ci_pipelines.project_id = 13083
          AND ci_pipelines.ref = 'master'
          AND ci_pipelines.status = 'success'
          ORDER BY ci_pipelines.id DESC
          LIMIT 1;
      
          Limit  (cost=0.43..58.88 rows=1 width=224) (actual time=26.956..26.956 rows=1 loops=1)
            Buffers: shared hit=6544 dirtied=16
            ->  Index Scan Backward using ci_pipelines_pkey on ci_pipelines  (cost=0.43..830922.89 rows=14216 width=224) (actual time=26.954..26.954 rows=1 loops=1)
                  Filter: ((project_id = 13083) AND ((ref)::text = 'master'::text) AND ((status)::text = 'success'::text))
                  Rows Removed by Filter: 6476
                  Buffers: shared hit=6544 dirtied=16
          Planning time: 1.484 ms
          Execution time: 27.000 ms
      
      Because "id" is not part of the index, we end up scanning over the
      primary key index and then applying a filter to discard any non-matching
      rows. The more pipelines a GitLab instance has, the slower this gets.
      
      By adding "id" to the mentioned composite index we can change the above
      plan into the following:
      
          Limit  (cost=0.56..2.01 rows=1 width=224) (actual time=0.034..0.034 rows=1 loops=1)
            Buffers: shared hit=5
            ->  Index Scan Backward using yorick_test on ci_pipelines  (cost=0.56..16326.37 rows=11243 width=224) (actual time=0.033..0.033 rows=1 loops=1)
                  Index Cond: ((project_id = 13083) AND ((ref)::text = 'master'::text) AND ((status)::text = 'success'::text))
                  Buffers: shared hit=5
          Planning time: 0.695 ms
          Execution time: 0.061 ms
      
      This in turn leads to a best-case improvement of roughly 25
      milliseconds, give or take a millisecond or two.
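      A hedged sketch of the index swap (the class name is an assumption;
      `add_concurrent_index`/`remove_concurrent_index` are GitLab migration
      helpers for changing indexes without locking the table):

          class AddIdToCiPipelinesCompositeIndex < ActiveRecord::Migration[4.2]
            include Gitlab::Database::MigrationHelpers

            disable_ddl_transaction!  # required by the concurrent helpers

            def up
              # The new index covers the WHERE clause *and* the ORDER BY id DESC,
              # so the LIMIT 1 lookup is served straight from the index.
              add_concurrent_index :ci_pipelines, [:project_id, :ref, :status, :id]
              remove_concurrent_index :ci_pipelines, [:project_id, :ref, :status]
            end

            def down
              add_concurrent_index :ci_pipelines, [:project_id, :ref, :status]
              remove_concurrent_index :ci_pipelines, [:project_id, :ref, :status, :id]
            end
          end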
  20. 20 Nov, 2017 · 1 commit
  21. 17 Nov, 2017 · 1 commit