- 02 January 2019, 1 commit

Committed by Douwe Maan
- 22 December 2018, 1 commit

- 19 December 2018, 3 commits

Committed by Jarka Košanová
- We now use the hierarchy class for epics as well
- Rename supports_nested_groups? to supports_nested_objects?
- Move it to a concern
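The rename above can be sketched as a plain-Ruby, concern-style module. Module and method names follow the commit message; the bodies and the `children` heuristic are assumptions for illustration only.

```ruby
# Sketch of the renamed check extracted into a shared, concern-style
# module. The `children` heuristic is an assumption, not GitLab's logic.
module SupportsNestedObjects
  def supports_nested_objects?
    # Hypothetical rule: nesting is supported when the object has children.
    respond_to?(:children)
  end
end

# Both groups and epics can now share the same hierarchy capability.
class Group
  include SupportsNestedObjects
  def children; []; end
end

class Epic
  include SupportsNestedObjects
  def children; []; end
end

# A class without children does not support nesting.
class Snippet
  include SupportsNestedObjects
end
```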

Committed by Zeger-Jan van de Weg
This action doesn't lean on deduplication, so a short call can be made to the Gitaly server to have the object pool remove its remote for the project pending deletion. https://gitlab.com/gitlab-org/gitaly/blob/f6cd55357/internal/git/objectpool/link.go#L58 When an object pool has no members, there is no longer a need for the pool. So when a project leaves the pool, the pool is destroyed in the background. Fixes: https://gitlab.com/gitlab-org/gitaly/issues/1415

Committed by Francisco Javier López
Removing the pipeline_ci_sources_only feature flag introduced in https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23353

- 13 December 2018, 1 commit

Committed by Francisco Javier López
- 12 December 2018, 1 commit

Committed by Zeger-Jan van de Weg

- 11 December 2018, 1 commit

Committed by Yorick Peterse
In https://gitlab.com/gitlab-org/release/framework/issues/28 we found that this method was changed a lot over the years: 43 times, if our calculations were correct. Looking at the method, it had quite a few branches going on:

    def create_or_update_import_data(data: nil, credentials: nil)
      return if data.nil? && credentials.nil?

      project_import_data = import_data || build_import_data

      if data
        project_import_data.data ||= {}
        project_import_data.data = project_import_data.data.merge(data)
      end

      if credentials
        project_import_data.credentials ||= {}
        project_import_data.credentials = project_import_data.credentials.merge(credentials)
      end

      project_import_data
    end

If we turn the || and ||= operators into regular if statements, we can see a bit more clearly that this method has quite a lot of branches in it:

    def create_or_update_import_data(data: nil, credentials: nil)
      if data.nil? && credentials.nil?
        return
      else
        project_import_data =
          if import_data
            import_data
          else
            build_import_data
          end

        if data
          if project_import_data.data
            # nothing
          else
            project_import_data.data = {}
          end

          project_import_data.data = project_import_data.data.merge(data)
        end

        if credentials
          if project_import_data.credentials
            # nothing
          else
            project_import_data.credentials = {}
          end

          project_import_data.credentials = project_import_data.credentials.merge(credentials)
        end

        project_import_data
      end
    end

The number of if statements and branches makes it easy to make mistakes. To resolve this, we refactor the code so that we can get rid of all but the first `if data.nil? && credentials.nil?` statement. We can do this by simply sending `to_h` to `nil` in the right places, which removes the need for statements such as `if data`. Since this data gets written to a database, in ProjectImportData we do make sure not to write empty Hash values. This requires an `unless` (which is really an `if !`), but the resulting code is still very easy to read.
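The refactor can be sketched without Rails as follows. The Struct stand-in for ProjectImportData is an assumption for illustration; the real class is an ActiveRecord model, which additionally avoids persisting empty hashes.

```ruby
# A minimal, framework-free sketch of the refactor described above:
# sending to_h to nil (nil.to_h == {}) collapses the `if data` and
# `if credentials` branches. ProjectImportData here is a stand-in.
ProjectImportData = Struct.new(:data, :credentials)

class Project
  attr_reader :import_data

  def build_import_data
    @import_data ||= ProjectImportData.new
  end

  def create_or_update_import_data(data: nil, credentials: nil)
    return if data.nil? && credentials.nil?

    project_import_data = import_data || build_import_data

    # nil.to_h => {}, so no nil checks or ||= are needed. (The real
    # model also avoids writing empty hashes to the database.)
    project_import_data.data = project_import_data.data.to_h.merge(data.to_h)
    project_import_data.credentials =
      project_import_data.credentials.to_h.merge(credentials.to_h)

    project_import_data
  end
end
```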

- 09 December 2018, 14 commits

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić
This implements Repository#ambiguous_ref? and checks if a ref is ambiguous before trying to resolve the ref in Project#protected_for?

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić

Committed by Matija Čupić
Reworks Project#resolve_ref to return Gitlab::Git::Branch, Gitlab::Git::Tag or raise an AmbiguousRef error.
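The two commits above (Repository#ambiguous_ref? and the reworked resolve_ref) can be illustrated with a dependency-free sketch. The in-memory Repo class and its branch/tag lists are hypothetical; the real code lives in Repository and Project and returns Gitlab::Git::Branch / Gitlab::Git::Tag.

```ruby
# Hypothetical sketch: a ref is ambiguous when it names both a branch
# and a tag; resolving such a ref raises instead of guessing.
AmbiguousRef = Class.new(StandardError)

Branch = Struct.new(:name)
Tag = Struct.new(:name)

class Repo
  def initialize(branches:, tags:)
    @branches = branches
    @tags = tags
  end

  # A short ref is ambiguous when it matches both a branch and a tag.
  def ambiguous_ref?(ref)
    @branches.include?(ref) && @tags.include?(ref)
  end

  # Return a Branch or a Tag, or raise when the ref is ambiguous.
  def resolve_ref(ref)
    raise AmbiguousRef if ambiguous_ref?(ref)

    return Branch.new(ref) if @branches.include?(ref)
    return Tag.new(ref) if @tags.include?(ref)
  end
end
```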

Committed by Matija Čupić

- 08 December 2018, 1 commit

Committed by Zeger-Jan van de Weg
When a project is forked, the new repository used to be a deep copy of everything stored on disk, by leveraging `git clone`. This works well and makes isolation between repositories easy. However, at the start the clone is 100% the same as the origin repository, and in the case of the objects in the object directory, this is almost always a lot of duplication.

Object pools are a way to create a third repository that essentially only exists for its `objects` subdirectory. This third repository's object directory is set as an alternate object location, which means that when an object is missing in the local repository, Git will look in another location: the object pool repository. When Git performs garbage collection, it is smart enough to check the alternate location, so when objects are duplicated it can throw one copy away. The discarded copy is the one in the local repository; the pool remains as is.

These pools have an origin location, which for now will always be a repository that is itself not a fork. When the root of a fork network is forked by a user, the fork still clones the full repository. Asynchronously, the pool repository is created. Either of these processes can finish before the other. To handle this race condition, the Join ObjectPool operation is idempotent: given that it is idempotent, we can schedule it twice with the same effect.

To accommodate the holding of state, two migrations have been added:

1. A state column was added to pool_repositories. This column is managed by the state machine, allowing hooks on transitions.
2. pool_repositories now has a source_project_id. This column is convenient to have for multiple reasons: it has a unique index, allowing the database to handle race conditions when creating a new record, and it's nice to know who the host is, as that's a short link to the fork network's root.

Object pools are only available for public projects that use hashed storage, and only when forking from the root of the fork network. (That is, the project being forked from is not itself a fork.)

In this commit message I use both ObjectPool and PoolRepository, which are alike but different from each other: ObjectPool refers to whatever is stored on disk and managed by Gitaly; PoolRepository is the record in the database.

- 07 December 2018, 2 commits

Committed by Steve Azzopardi
Add a new endpoint `projects/:id/jobs/artifacts/:ref_name/raw/*artifact_path?job=name`, which closely mirrors the web URL for consistency's sake. This endpoint can be used to download a single file from the artifacts for the specified ref and job. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54626
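Building a request URL for this endpoint might look like the following sketch. The project ID, ref, job name, and artifact path are hypothetical placeholders, and authentication is omitted.

```ruby
require 'uri'
require 'net/http' # Net::HTTP.get(uri) would perform the download

# Hedged sketch: compose the raw-artifact URL described in the commit.
# All concrete values below are placeholders, not real projects.
def artifact_raw_uri(base, project_id, ref, artifact_path, job)
  URI("#{base}/api/v4/projects/#{project_id}/jobs/artifacts/#{ref}/raw/#{artifact_path}?job=#{job}")
end

uri = artifact_raw_uri('https://gitlab.example.com', 42, 'master', 'bin/app', 'build')
# Net::HTTP.get(uri) would then fetch the single artifact file (auth omitted).
```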

Committed by Nick Thomas

- 06 December 2018, 1 commit

Committed by James Lopez
Resolve "Can add an existing group member into a group project with new permissions but permissions are not overridden"

- 05 December 2018, 8 commits

Committed by Francisco Javier López

Committed by Grzegorz Bizon

Committed by Ash McKenzie
For CE, #lfs_http_url_to_repo calls #http_url_to_repo, whereas for EE we check for a Geo setup so we can support push to secondary for LFS.
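The CE/EE split can be sketched with Module#prepend, the pattern GitLab uses for EE overrides. The class bodies, URLs, and the `operation` parameter below are assumptions for illustration.

```ruby
# Hedged sketch of a CE method with an EE override. All URLs and the
# 'upload' operation check are hypothetical placeholders.
class Project
  def http_url_to_repo
    'https://gitlab.example.com/group/project.git' # placeholder
  end

  # CE behaviour: LFS simply uses the plain HTTP URL.
  def lfs_http_url_to_repo(_operation = nil)
    http_url_to_repo
  end
end

# EE-style prepended module that could pick a Geo secondary for pushes.
module EE
  module Project
    def lfs_http_url_to_repo(operation = nil)
      # Assumption: route uploads to a Geo secondary; otherwise fall back.
      return 'https://geo-secondary.example.com/group/project.git' if operation == 'upload'

      super
    end
  end
end

Project.prepend(EE::Project)
```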

Committed by Thong Kuah
With this MR, group clusters are now functional, so they default to enabled. There is a single setting on the root ancestor group to enable or disable the group clusters feature as a whole.

Committed by Thong Kuah
- Rename ordered_group_clusters_for_project -> ancestor_clusters_for_clusterable
- Improve the name of the order option: it makes much more sense to have `hierarchy_order: :asc` and `hierarchy_order: :desc`
- Allow ancestor_clusters_for_clusterable for group
- Re-use code already present in Project

Committed by Thong Kuah
AFAIK the only relevant place is Projects::CreateService; this gets called when a user creates a new project, forks a project, or does those things via the API. Also create a k8s namespace for the new group hierarchy when transferring a project between groups, using the new Refresh service to create k8s namespaces.

- Ensure we use Cluster#cluster_project: if a project has multiple clusters (EE), Project#cluster_project is not guaranteed to return the cluster_project for this cluster, so switch to using Cluster#cluster_project. At this stage a cluster can only have one cluster_project.
- Also, remove the rescue so that Sidekiq can retry.

Committed by Thong Kuah
This returns a union of the project level clusters and group level clusters associated with this project.
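The union described above can be sketched without ActiveRecord; the real implementation uses scopes, and all class and method names below beyond the commit's wording are illustrative.

```ruby
# Dependency-free sketch: a project's clusters are its own project-level
# clusters plus the clusters of its group and all ancestor groups.
Cluster = Struct.new(:name)

class Group
  attr_reader :clusters, :parent

  def initialize(clusters:, parent: nil)
    @clusters = clusters
    @parent = parent
  end

  # Walk up the hierarchy, nearest group first.
  def self_and_ancestors
    node = self
    nodes = []
    while node
      nodes << node
      node = node.parent
    end
    nodes
  end
end

class Project
  def initialize(clusters:, group:)
    @clusters = clusters
    @group = group
  end

  # Union of project-level clusters and every ancestor group's clusters.
  def all_clusters
    @clusters + @group.self_and_ancestors.flat_map(&:clusters)
  end
end
```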

Committed by Thong Kuah
kubernetes_namespaces is not needed for project import/export, as it tracks the internal state of the Kubernetes integration.

- 03 December 2018, 1 commit

Committed by Grzegorz Bizon

- 30 November 2018, 1 commit

Committed by Toon Claes

- 28 November 2018, 1 commit

Committed by Tiago Botelho
Caches repository.path into Repository#readme_path

- 27 November 2018, 2 commits

Committed by Tiago Botelho
Moves the import-related columns and code from the Project model over to the ProjectImportState model

Committed by Gabriel Mazetto
This approach caused many different problems as we tightened the query execution timeout.

- 26 November 2018, 1 commit

Committed by Bob Van Landuyt
Shell out to git to write refs instead of using Rugged, hoping to avoid creating invalid refs. To update HEAD we switched to using `git symbolic-ref`.
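Shelling out to update HEAD might look like the following sketch. The helper name and error handling are assumptions; GitLab's real code goes through its own popen helpers (and today through Gitaly).

```ruby
require 'open3'

# Hedged sketch: point HEAD at a branch via `git symbolic-ref` instead
# of writing the ref through Rugged. Helper name is hypothetical.
def write_head(repo_path, branch)
  out, status = Open3.capture2e(
    'git', '-C', repo_path, 'symbolic-ref', 'HEAD', "refs/heads/#{branch}"
  )
  raise "failed to update HEAD: #{out}" unless status.success?
end
```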