- 22 May 2018, 1 commit
-
-
Committed by Douwe Maan
-
- 18 May 2018, 7 commits
-
-
Committed by André Luís
-
Committed by Lukas Eipert
-
Committed by Murat Dogan
-
Committed by Robert Speicher
-
Committed by Robert Speicher
-
Committed by Jacopo
-
Committed by Harish Ved
-
- 17 May 2018, 13 commits
-
-
Committed by Dmitriy Zaporozhets
* Reset p bottom margin for group lists to fix vertical alignment
* Remove double border for group lists to be consistent with project lists

Signed-off-by: Dmitriy Zaporozhets <dmitriy.zaporozhets@gmail.com>
-
Committed by Yorick Peterse
When displaying a project's pipelines (Projects::PipelinesController#index) we now exclude the coverage data. This data was not used by the frontend, yet getting it would require one SQL query per pipeline. These queries in turn could be quite expensive on GitLab.com.
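A minimal Ruby sketch of one way to leave such an expensive field out of the serialized pipelines, assuming a Grape::Entity-style serializer; the entity name and the include_coverage option are illustrative, not GitLab's actual code:

    require 'grape_entity'

    class PipelineEntity < Grape::Entity
      expose :id
      expose :status
      # Coverage needs one extra SQL query per pipeline, so only expose it
      # when a caller explicitly opts in.
      expose :coverage, if: ->(_pipeline, options) { options[:include_coverage] }
    end

    # PipelineEntity.represent(pipelines)                          # no coverage queries
    # PipelineEntity.represent(pipelines, include_coverage: true)  # opt in when needed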
-
Committed by Yorick Peterse
When displaying the pipelines of a project we now preload the following data:

1. Authors of the commits that belong to these pipelines
2. The number of warnings per pipeline, which is used by Ci::Pipeline#has_warnings?

== Commit Authors

Previously this data was queried for every Commit separately, leading to 20 SQL queries being executed in the worst case. With an average of 3 to 5 milliseconds per SQL query this could result in 100 milliseconds being spent in _just_ getting Commit authors.

To preload this data Commit#author now uses BatchLoader (through Commit#lazy_author), and a separate module Gitlab::Ci::Pipeline::Preloader is used to ensure all authors are loaded before they are used.

== Number of warnings

This changes Ci::Pipeline#has_warnings? so it supports preloading of the number of warnings per pipeline. This removes the need for executing a COUNT(*) query for every pipeline just to see if it has any warnings or not.
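A minimal Ruby sketch of the batching idea, using the BatchLoader gem's block API; Commit#lazy_author is named in the message above, but the lookup key and query here are simplified assumptions rather than GitLab's exact implementation:

    require 'batch_loader'

    class Commit
      # Returns a lazy placeholder; all pending authors are fetched with a
      # single users query the first time any of them is actually used.
      def lazy_author
        BatchLoader.for(author_email.downcase).batch do |emails, loader|
          User.where(email: emails).each do |user|
            loader.call(user.email.downcase, user)
          end
        end
      end
    end

    # authors = pipelines.map { |pipeline| pipeline.commit.lazy_author }
    # authors.first&.name  # triggers one batched query for all commits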
-
Committed by Yorick Peterse
When displaying the project pipelines dashboard we display a few tabs for different pipeline states. For every such tab we count the number of pipelines that belong to it. For large projects such as GitLab CE this means having to count over 80 000 rows, which can easily take between 70 and 100 milliseconds per query.

To improve this we apply a technique we already use for search results: we limit the number of rows to count. The current limit is 1000, which means that if more than 1000 rows are present for a state we will show "1000+" instead of the exact number. The SQL queries used for this perform much better than a regular COUNT, even when a project has a lot of pipelines.

Prior to these changes we would end up running a query like this:

    SELECT COUNT(*)
    FROM ci_pipelines
    WHERE project_id = 13083
    AND status IN ('success', 'failed', 'canceled')

This would produce a plan along the lines of the following:

    Aggregate  (cost=3147.55..3147.56 rows=1 width=8) (actual time=501.413..501.413 rows=1 loops=1)
      Buffers: shared hit=17116 read=861 dirtied=2
      ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=0) (actual time=0.095..490.263 rows=80388 loops=1)
            Index Cond: (project_id = 13083)
            Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
            Rows Removed by Filter: 2894
            Heap Fetches: 353
            Buffers: shared hit=17116 read=861 dirtied=2
    Planning time: 1.409 ms
    Execution time: 501.519 ms

Using the LIMIT count technique we instead run the following query:

    SELECT COUNT(*)
    FROM (
      SELECT 1
      FROM ci_pipelines
      WHERE project_id = 13083
      AND status IN ('success', 'failed', 'canceled')
      LIMIT 1001
    ) for_count

This query produces the following plan:

    Aggregate  (cost=58.77..58.78 rows=1 width=8) (actual time=1.726..1.727 rows=1 loops=1)
      Buffers: shared hit=169 read=15
      ->  Limit  (cost=0.56..46.25 rows=1001 width=4) (actual time=0.164..1.570 rows=1001 loops=1)
            Buffers: shared hit=169 read=15
            ->  Index Only Scan using index_ci_pipelines_on_project_id_and_ref_and_status_and_id on ci_pipelines  (cost=0.56..2984.14 rows=65364 width=4) (actual time=0.162..1.426 rows=1001 loops=1)
                  Index Cond: (project_id = 13083)
                  Filter: ((status)::text = ANY ('{success,failed,canceled}'::text[]))
                  Rows Removed by Filter: 9
                  Heap Fetches: 10
                  Buffers: shared hit=169 read=15
    Planning time: 1.832 ms
    Execution time: 1.821 ms

While this query still uses a Filter for the "status" field, the number of rows that it may end up filtering (at most 1001) is small enough that an additional index does not appear to be necessary at this time.

See https://gitlab.com/gitlab-org/gitlab-ce/issues/43132#note_68659234 for more information.
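A small Ruby sketch of the same technique at the ActiveRecord level; limited_pipeline_count and MAX_COUNT are illustrative names, not the helper GitLab actually uses:

    MAX_COUNT = 1_000

    # Counts at most MAX_COUNT + 1 rows; anything beyond that is shown as "1000+".
    def limited_pipeline_count(relation)
      # Rails counts a limited relation through a subquery, matching the
      # "SELECT COUNT(*) FROM (... LIMIT 1001) for_count" query shown above.
      count = relation.limit(MAX_COUNT + 1).count
      count > MAX_COUNT ? "#{MAX_COUNT}+" : count.to_s
    end

    # limited_pipeline_count(project.pipelines.where(status: %w[success failed canceled]))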
-
Committed by Filipa Lacerda
-
Committed by Filipa Lacerda
-
Committed by 🙈 jacopo beschi 🙉
-
Committed by Mayra Cabrera
Also moves the assertions to where they belong
-
Committed by Rémy Coutable
Signed-off-by: Rémy Coutable <remy@rymai.me>
-
Committed by Annabel Dunstone Gray
-
Committed by Rémy Coutable
Signed-off-by: Rémy Coutable <remy@rymai.me>
-
Committed by Rémy Coutable
Signed-off-by: Rémy Coutable <remy@rymai.me>
-
Committed by haseeb
-
- 16 May 2018, 19 commits
-
-
Committed by Stan Hu
Uses PostgreSQL tuple estimates to provide a much faster yet approximate count. See https://wiki.postgresql.org/wiki/Slow_Counting for more details. We only use this fast method if the table has been analyzed or vacuumed within the last hour. Closes #46255
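A minimal Ruby sketch of the tuple-estimate trick described above; the method name is illustrative, and the check that the table was analyzed or vacuumed within the last hour is left out:

    # Reads the planner's row estimate from pg_class instead of running a
    # full COUNT(*); the value is only as fresh as the last ANALYZE/VACUUM.
    def approximate_row_count(table_name)
      connection = ActiveRecord::Base.connection
      connection.select_value(<<~SQL).to_i
        SELECT reltuples::bigint
        FROM pg_class
        WHERE relname = #{connection.quote(table_name)}
      SQL
    end

    # approximate_row_count('merge_requests')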
-
Committed by Filipa Lacerda
-
Committed by Lukas Eipert
-
Committed by Lars Greiss
-
Committed by Zeger-Jan van de Weg
Prior to this change, this was done through unicorn. In theory this could time out. Workhorse has been sending these raw patches and diffs for a long time and is stable in doing so. An added bonus is that `Commit#to_patch` can be removed, and `Commit#to_diff` too, which closes https://gitlab.com/gitlab-org/gitaly/issues/324. Closes https://gitlab.com/gitlab-org/gitaly/issues/1196
-
Committed by Dylan Griffith
-
Committed by Dylan Griffith
-
Committed by Dylan Griffith
-
Committed by Dylan Griffith
-
Committed by Dylan Griffith
-
Committed by Dylan Griffith
-
Committed by Dmitriy Zaporozhets
-
Committed by Jan Provaznik
-
Committed by Jan Provaznik
-
Committed by Jan Provaznik
destroy_all loads all records at once
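For context, a short Ruby sketch of the ActiveRecord behaviour the message above refers to; the todos relation is just an example, not necessarily the one touched by this commit:

    # destroy_all instantiates every matching record and destroys them one by
    # one so callbacks run: O(n) objects in memory and O(n) DELETE queries.
    project.todos.destroy_all

    # delete_all issues a single DELETE and skips callbacks entirely.
    project.todos.delete_all

    # find_in_batches bounds memory when callbacks are still needed.
    project.todos.find_in_batches(batch_size: 100) do |batch|
      batch.each(&:destroy)
    end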
-
Committed by Jan Provaznik
The ObjectStore uploader requires the presence of the associated `uploads` record when deleting the upload file (through carrierwave's after_commit hook), because we keep the information on whether a file is LOCAL or REMOTE in the `upload` object. For this reason we can not destroy uploads via a "dependent: :destroy" hook, because they would be deleted too soon. Instead we rely on carrierwave's after_commit hook to destroy the `uploads`.

But in a before_destroy hook we still have to delete non-mounted uploads (which don't use carrierwave's destroy hook). This has to be done in before_destroy instead of after_commit because `FileUpload` requires the existence of the model's object on the destroy action.

This is not an ideal state of things; as a next step we should investigate how to unify model dependencies so we can use the same workflow for all uploads.

Related to #45425
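A heavily simplified Ruby sketch of the ordering described above; the uploads association, the uploader column, and mounted_uploader_names are assumptions used for illustration only:

    class Project < ActiveRecord::Base
      has_many :uploads, as: :model

      # Mounted uploads are removed later by CarrierWave's after_commit hook,
      # which still needs the `uploads` rows to tell LOCAL from REMOTE files,
      # so only the non-mounted uploads are destroyed up front.
      before_destroy :destroy_non_mounted_uploads

      private

      def destroy_non_mounted_uploads
        # mounted_uploader_names is a placeholder for however the model knows
        # which uploaders CarrierWave manages through its own hooks.
        uploads.where.not(uploader: mounted_uploader_names).destroy_all
      end
    end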
-
Committed by Paul Slaughter
-
Committed by Paul Vorbach
-
Committed by Paul Vorbach
-