- 03 December 2017, 5 commits
-
-
Committed by Zeger-Jan van de Weg
-
Committed by Zeger-Jan van de Weg
-
Committed by Zeger-Jan van de Weg
Two things at once, as there was no clean way to separate the commits and get feedback from the tests. The model Artifact is now JobArtifact, the table no longer has a type column, and the metadata is now its own model: Ci::JobArtifactMetadata.
-
Committed by Zeger-Jan van de Weg
To allow jobs/builds to have multiple artifacts, and to start separating concerns out of Ci::Build, a new model is created: Ci::Artifact. Changes include updating the ArtifactUploader to adapt to a slightly different interface: the uploader expects to be initialized with a `Ci::Build`. Further, a migration adds the minimal fields, the needed foreign keys, and an index. Lastly, this works by prepending a module to Ci::Build, so we can override behaviour but, if needed, call `super` to get the original behaviour.
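The module-prepend technique mentioned above can be sketched in plain Ruby. The module's method runs before the class's own definition, and `super` reaches the original implementation. The class, module, and attribute names below are illustrative, not GitLab's actual code:

```ruby
# Sketch of the module-prepend pattern: the prepended module's method
# is found first in the ancestor chain, and `super` falls through to
# the class's original implementation. Names here are illustrative.
module ArtifactOverrides
  def artifacts?
    # Override: check the new artifacts association first, then fall
    # back to the original (legacy) behaviour via `super`.
    new_artifacts.any? || super
  end
end

class Build
  prepend ArtifactOverrides

  def initialize(new_artifacts: [], legacy_artifacts_file: nil)
    @new_artifacts = new_artifacts
    @legacy_artifacts_file = legacy_artifacts_file
  end

  attr_reader :new_artifacts

  def artifacts?
    # Original behaviour: only knows about the legacy file column.
    !@legacy_artifacts_file.nil?
  end
end

puts Build.new(new_artifacts: [:archive]).artifacts?      # true (via override)
puts Build.new(legacy_artifacts_file: "a.zip").artifacts? # true (via super)
puts Build.new.artifacts?                                 # false
```

Because `prepend` places the module *before* the class in the ancestor chain, the override wins without monkey-patching the original method away.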
-
Committed by Zeger-Jan van de Weg
-
- 29 November 2017, 4 commits
-
-
Committed by Bob Van Landuyt
Rescheduling will make sure the fork networks with a deleted source project are created.
-
Committed by Andrew Newdigate
-
Committed by Sean McGivern
For getting the SHAs from an MR to find pipelines, we get the last 100 MR diffs for the MR and find commits from those. This was unindexed before, because the index was not a composite index on (merge_request_diff_id, id). Changing that means this scope can use indexes exclusively.
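A composite index of the kind described could be added with a migration along these lines. This is a sketch only: the table name, the migration class name, and the use of `algorithm: :concurrently` are assumptions, not the actual GitLab migration.

```ruby
# Sketch of adding the composite (merge_request_diff_id, id) index
# described above. Table and class names are assumptions.
class AddCompositeIndexOnMergeRequestDiffIdAndId < ActiveRecord::Migration
  disable_ddl_transaction!

  def change
    # With both columns in one index, the "last 100 diffs" scope can be
    # served entirely from the index instead of filtering rows.
    add_index :merge_request_diff_commits,
              [:merge_request_diff_id, :id],
              algorithm: :concurrently
  end
end
```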
-
Committed by Sean McGivern
The st_commits and st_diffs columns on merge_request_diffs historically held the YAML-serialised data for a merge request diff, in a variety of formats. Since 9.5, these have been migrated in the background to two new tables: merge_request_diff_commits and merge_request_diff_files. That has the advantage that we can actually query the data (for instance, to find out how many commits we've stored), and that it can't be in a variety of formats, but must match the new schema. This is the final step of that journey, where we drop those columns and remove all references to them. This is a breaking change to the importer, because we can no longer import diffs created in the old format, and we cannot guarantee the export will be in the new format unless it was generated after this commit.
-
- 24 November 2017, 3 commits
-
-
Committed by Yorick Peterse
This ensures that merge_requests.state and merge_requests.merge_status both have a proper default value and a NOT NULL constraint at the database level. We also make sure to update any bogus rows first, without blowing up the database. Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/40534
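The "fix bogus rows first, then tighten the constraints" approach can be sketched as a Rails migration. This is a hedged sketch: the batch size, the `'unchecked'` default, and the migration class name are assumptions based on the commit message, not the real GitLab migration.

```ruby
# Sketch: repair NULL rows in small batches before adding the default
# and NOT NULL constraint, so the table is never locked for long.
class CleanupMergeRequestsMergeStatus < ActiveRecord::Migration
  def up
    # Update bogus rows in batches to avoid one huge, blocking UPDATE.
    loop do
      updated = execute(<<~SQL).cmd_tuples
        UPDATE merge_requests
        SET merge_status = 'unchecked'
        WHERE id IN (
          SELECT id FROM merge_requests
          WHERE merge_status IS NULL
          LIMIT 1000
        )
      SQL
      break if updated.zero?
    end

    # Only once no bad rows remain can the constraints be added safely.
    change_column_default :merge_requests, :merge_status, 'unchecked'
    change_column_null :merge_requests, :merge_status, false
  end
end
```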
-
Committed by Pawel Chojnacki
-
Committed by Pawel Chojnacki
-
- 23 November 2017, 1 commit
-
-
Committed by Markus Koller
-
- 22 November 2017, 2 commits
-
-
Committed by Shinya Maeda
-
Committed by Yorick Peterse
This updates the composite index on ci_pipelines (project_id, ref, status) to also include the "id" column at the end. Adding this column to the index drastically improves the performance of queries used for getting the latest pipeline for a particular branch. For example, on project dashboards we'll run a query like the following:

    SELECT ci_pipelines.*
    FROM ci_pipelines
    WHERE ci_pipelines.project_id = 13083
    AND ci_pipelines.ref = 'master'
    AND ci_pipelines.status = 'success'
    ORDER BY ci_pipelines.id DESC
    LIMIT 1;

    Limit  (cost=0.43..58.88 rows=1 width=224) (actual time=26.956..26.956 rows=1 loops=1)
      Buffers: shared hit=6544 dirtied=16
      ->  Index Scan Backward using ci_pipelines_pkey on ci_pipelines  (cost=0.43..830922.89 rows=14216 width=224) (actual time=26.954..26.954 rows=1 loops=1)
            Filter: ((project_id = 13083) AND ((ref)::text = 'master'::text) AND ((status)::text = 'success'::text))
            Rows Removed by Filter: 6476
            Buffers: shared hit=6544 dirtied=16
    Planning time: 1.484 ms
    Execution time: 27.000 ms

Because of the lack of "id" in the index we end up scanning over the primary key index, then applying a filter to discard any remaining rows. The more pipelines a GitLab instance has, the slower this gets. By adding "id" to the mentioned composite index we can change the above plan into the following:

    Limit  (cost=0.56..2.01 rows=1 width=224) (actual time=0.034..0.034 rows=1 loops=1)
      Buffers: shared hit=5
      ->  Index Scan Backward using yorick_test on ci_pipelines  (cost=0.56..16326.37 rows=11243 width=224) (actual time=0.033..0.033 rows=1 loops=1)
            Index Cond: ((project_id = 13083) AND ((ref)::text = 'master'::text) AND ((status)::text = 'success'::text))
            Buffers: shared hit=5
    Planning time: 0.695 ms
    Execution time: 0.061 ms

This in turn leads to a best-case improvement of roughly 25 milliseconds, give or take a millisecond or two.
-
- 20 November 2017, 1 commit
-
-
Committed by Yorick Peterse
This adds various foreign keys and indexes to the "merge_requests" table as outlined in https://gitlab.com/gitlab-org/gitlab-ce/issues/31825. Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/31825
-
- 17 November 2017, 2 commits
-
-
Committed by Bob Van Landuyt
-
Committed by Michael Kozono
-
- 10 November 2017, 1 commit
-
-
Committed by Yorick Peterse
This adds various foreign key constraints, indexes, missing NOT NULL constraints, and changes some column types from timestamp to timestamptz. Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/31811
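The kinds of schema changes listed in this commit can be sketched as a single migration. This is illustrative only: the table (`events`) and column names are assumptions standing in for whichever tables the real migration touched.

```ruby
# Sketch of the schema cleanup described above: foreign keys, a missing
# NOT NULL constraint, and timestamp -> timestamptz (PostgreSQL-specific).
# Table and column names are placeholders, not the actual migration.
class CleanupEventsSchema < ActiveRecord::Migration
  def up
    # Foreign key so orphaned rows are removed with their parent.
    add_foreign_key :events, :projects, on_delete: :cascade

    # Enforce a NOT NULL constraint that was previously missing.
    change_column_null :events, :author_id, false

    # timestamp (without time zone) -> timestamptz, so values are
    # stored unambiguously in UTC.
    change_column :events, :created_at, :timestamptz
    change_column :events, :updated_at, :timestamptz
  end
end
```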
-
- 08 November 2017, 2 commits
-
-
Committed by Kamil Trzcinski
-
Committed by Shinya Maeda
-
- 07 November 2017, 1 commit
-
-
Committed by Kamil Trzcinski
-
- 06 November 2017, 4 commits
-
-
Committed by Alessio Caiazza
-
Committed by Markus Koller
-
Committed by Markus Koller
-
Committed by Kamil Trzcinski
-
- 03 November 2017, 3 commits
-
-
Committed by micael.bergeron
Also, I refactored the MergeRequest#fetch_ref method to express the side effect that this method has: MergeRequest#fetch_ref -> MergeRequest#fetch_ref!, and Repository#fetch_source_branch -> Repository#fetch_source_branch!
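The renames follow the Ruby convention that a trailing `!` marks a method with a side effect, here mutating repository state rather than merely reading it. A minimal illustration in plain Ruby (the class and method bodies below are illustrative, not GitLab's code):

```ruby
# Minimal illustration of the bang-naming convention referenced above:
# the `!` suffix signals that the method mutates state. Names and
# behaviour are illustrative, not GitLab's actual implementation.
class Repository
  attr_reader :refs

  def initialize
    @refs = {}
  end

  # Side-effecting: writes a ref into the repository, hence the `!`.
  def fetch_ref!(name, sha)
    @refs[name] = sha
    self
  end
end

repo = Repository.new
repo.fetch_ref!("refs/merge-requests/1/head", "deadbeef")
puts repo.refs # {"refs/merge-requests/1/head"=>"deadbeef"}
```

The rename carries no behaviour change; it only makes the mutation visible at every call site.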
-
Committed by Shinya Maeda
-
Committed by Sean McGivern
We already had this the other way around (merge_request_diffs.merge_request_id), but this is needed to gather only the most recent diffs for a set of merge requests.
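A migration for the reverse-direction lookup might look like the following sketch. The column name `latest_merge_request_diff_id` and the migration class name are assumptions inferred from the commit message, not confirmed by it.

```ruby
# Sketch: index the merge_requests -> diff direction so the most
# recent diff for a set of merge requests can be fetched cheaply.
# Column and class names are assumptions.
class AddIndexOnMergeRequestsLatestDiffId < ActiveRecord::Migration
  disable_ddl_transaction!

  def change
    add_index :merge_requests, :latest_merge_request_diff_id,
              algorithm: :concurrently
  end
end
```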
-
- 02 November 2017, 3 commits
-
-
Committed by Shinya Maeda
Fix being out of sync with KubernetesService. Remove namespace params from the controller. Use time_with_zone in the schema. Remove Gcp::Clusters from safe_model_attributes.yml.
-
Committed by Kamil Trzcinski
-
Committed by Douwe Maan
-
- 01 November 2017, 3 commits
-
-
Committed by Kamil Trzcinski
-
Committed by Kamil Trzcinski
-
Committed by Shinya Maeda
-
- 23 October 2017, 2 commits
-
-
Committed by Bob Van Landuyt
-
Committed by Shinya Maeda
-
- 17 October 2017, 1 commit
-
-
Committed by Bob Van Landuyt
-
- 13 October 2017, 1 commit
-
-
Committed by Vlad
-
- 07 October 2017, 1 commit
-
-
Committed by Bob Van Landuyt
When no fork network exists for the source project, we create a new one with the correct source.
-