1. 17 May, 2018 (1 commit)
    • Limit the number of pipelines to count · 70985aa1
      Committed by Yorick Peterse
      The project pipelines dashboard displays a few tabs for different
      pipeline states. For every such tab we count the number of pipelines
      that belong to it. For large projects such as GitLab CE this means
      counting over 80,000 rows, which can easily take between 70 and 100
      milliseconds per query.
      
      To improve this we apply a technique we already use for search results:
      we limit the number of rows to count. The current limit is 1000, which
      means that if more than 1000 rows are present for a state we will show
      "1000+" instead of the exact number. The SQL queries used for this
      perform much better than a regular COUNT, even when a project has a lot
      of pipelines.
      
      Prior to these changes we would end up running a query like this:
      
          SELECT COUNT(*)
          FROM ci_pipelines
          WHERE project_id = 13083
          AND status IN ('success', 'failed', 'canceled')
      
      This would produce a plan along the lines of the following:
      
          Aggregate  (cost=3147.55..3147.56 rows=1 width=8) (actual time=501.413..501.413 rows=1 loops=1)
            Buffers: shared hit=17116 read=861 dirtied=2
            ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=0) (actual time=0.095..490.263 rows=80388 loops=1)
                  Index Cond: (project_id = 13083)
                  Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
                  Rows Removed by Filter: 2894
                  Heap Fetches: 353
                  Buffers: shared hit=17116 read=861 dirtied=2
          Planning time: 1.409 ms
          Execution time: 501.519 ms
      
      Using the LIMIT count technique we instead run the following query:
      
          SELECT COUNT(*)
          FROM (
              SELECT 1
              FROM ci_pipelines
              WHERE project_id = 13083
              AND status IN ('success', 'failed', 'canceled')
              LIMIT 1001
          ) for_count
      
      This query produces the following plan:
      
          Aggregate  (cost=58.77..58.78 rows=1 width=8) (actual time=1.726..1.727 rows=1 loops=1)
            Buffers: shared hit=169 read=15
            ->  Limit  (cost=0.56..46.25 rows=1001 width=4) (actual time=0.164..1.570 rows=1001 loops=1)
                  Buffers: shared hit=169 read=15
                  ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=4) (actual time=0.162..1.426 rows=1001 loops=1)
                        Index Cond: (project_id = 13083)
                        Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
                        Rows Removed by Filter: 9
                        Heap Fetches: 10
                        Buffers: shared hit=169 read=15
          Planning time: 1.832 ms
          Execution time: 1.821 ms
      
      While this query still uses a Filter on the "status" field, the number
      of rows it may end up filtering (at most 1001) is small enough that an
      additional index does not appear to be necessary at this time.
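
      In ActiveRecord terms the same pattern can be sketched roughly as
      follows. This is a minimal sketch, assuming a hypothetical helper and
      constant rather than the exact implementation; counting a relation that
      carries a LIMIT makes Rails wrap it in a subquery much like the one
      shown above:

          # Hypothetical helper; the constant and method name are
          # illustrative only.
          PIPELINES_COUNT_LIMIT = 1_000

          def limited_pipeline_count(project, statuses)
            relation = Ci::Pipeline
              .where(project_id: project.id, status: statuses)
              .limit(PIPELINES_COUNT_LIMIT + 1)

            # Counting a limited relation yields a query of the form
            # SELECT COUNT(*) FROM (SELECT 1 ... LIMIT 1001) subquery_for_count
            count = relation.count
            count > PIPELINES_COUNT_LIMIT ? "#{PIPELINES_COUNT_LIMIT}+" : count.to_s
          end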
      
      See https://gitlab.com/gitlab-org/gitlab-ce/issues/43132#note_68659234
      for more information.
  2. 03 May, 2018 (1 commit)
  3. 02 May, 2018 (1 commit)
  4. 19 December, 2017 (1 commit)
    • Load commit in batches for pipelines#index · c6edae38
      Committed by Zeger-Jan van de Weg
      Uses `list_commits_by_oid` on the CommitService to request the commits
      needed for pipelines. These commits are needed to display the user who
      created each commit and the commit title.

      This also fixes failing tests that depended on the commit being `nil`.
      Now that commits are batch loaded this no longer happens, and the
      commits are instances of BatchLoader.
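
      A rough sketch of the batch-loading pattern with the batch-loader gem
      follows; the `lazy_commit` helper and the repository method it calls are
      illustrative, not the actual GitLab code:

          require 'batch_loader'

          # Returns a lazy stand-in that only triggers loading when first
          # used; every oid queued during the request is then fetched in a
          # single list_commits_by_oid round trip.
          def lazy_commit(repository, oid)
            BatchLoader.for(oid).batch do |oids, loader|
              repository.list_commits_by_oid(oids).each do |commit|
                loader.call(commit.id, commit)
              end
            end
          end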
  5. 27 October, 2017 (1 commit)
    • Cache commits on the repository model · 3411fef1
      Committed by Zeger-Jan van de Weg
      Before this change, when requesting a commit from the Repository model,
      the result was not cached, meaning we fetched the same commit by oid
      multiple times during the same request. To prevent this, we now cache
      results. Caching is based only on the object id (aka SHA).

      Given we cache on the Repository model, results are scoped to the
      associated project, even though the chance of two repositories having
      the same oids for different commits is small.
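
      A minimal sketch of such a per-request cache, assuming the Repository
      model memoizes commits in an instance-level hash keyed by oid (the
      lookup method is hypothetical):

          class Repository
            def commit(oid)
              @commit_cache ||= {}
              # Keyed by object id only; because the hash lives on this
              # Repository instance, entries are implicitly scoped to the
              # associated project.
              return @commit_cache[oid] if @commit_cache.key?(oid)

              @commit_cache[oid] = find_commit_by_oid(oid) # hypothetical lookup
            end
          end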
  6. 20 October, 2017 (1 commit)
  7. 16 October, 2017 (1 commit)
  8. 03 October, 2017 (1 commit)
  9. 03 August, 2017 (1 commit)
  10. 02 August, 2017 (1 commit)
  11. 18 July, 2017 (1 commit)
  12. 04 July, 2017 (1 commit)
  13. 13 June, 2017 (1 commit)
  14. 10 June, 2017 (1 commit)
  15. 17 May, 2017 (1 commit)
    • Improve pipeline size for query limit test · 63da9172
      Committed by Z.J. van de Weg
      The pipeline was quite meagre in both the number of stages and the
      number of groups. This has been improved. Performance is not yet
      optimal, but to keep it from sliding further down this slippery slope,
      a hard limit has been set.
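
      One way such a hard limit can be expressed in a spec is by counting SQL
      notifications; this is only a sketch, and the request parameters and the
      limit of 60 are illustrative, not the values used in the actual test:

          # Counts SQL statements executed inside the block via
          # ActiveSupport::Notifications, ignoring cached queries.
          def count_queries(&block)
            count = 0
            counter = ->(*, payload) do
              count += 1 unless payload[:cached] || payload[:name] == 'CACHE'
            end
            ActiveSupport::Notifications.subscribed(counter, 'sql.active_record', &block)
            count
          end

          it 'stays below the hard query limit' do
            queries = count_queries do
              get :index, namespace_id: project.namespace, project_id: project
            end

            expect(queries).to be <= 60 # illustrative limit
          end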
  16. 07 May, 2017 (1 commit)
  17. 05 May, 2017 (1 commit)
  18. 22 April, 2017 (1 commit)
  19. 17 April, 2017 (1 commit)
  20. 13 April, 2017 (1 commit)
  21. 23 March, 2017 (6 commits)
  22. 24 February, 2017 (1 commit)
  23. 23 February, 2017 (1 commit)
  24. 17 February, 2017 (1 commit)
  25. 21 December, 2016 (2 commits)
  26. 20 December, 2016 (1 commit)