- 05 Mar, 2019 1 commit
-
-
Committed by João Cunha
- Creates new route
- Creates new controller action
- Creates call stack: Clusters::ApplicationsController calls --> Clusters::Applications::UpdateService calls --> Clusters::Applications::ScheduleUpdateService calls --> ClusterUpdateAppWorker calls --> Clusters::Applications::PatchService --> ClusterWaitForAppInstallationWorker
- DRY req params
- Adds gcp_cluster:cluster_update_app queue
- Schedule_update_service is unneeded
- Extract common logic to a parent class (UpdateService will need it)
- Introduce new UpdateService
- Fix rescue class namespace
- Fix RuboCop offenses
- Adds BaseService for create and update services
- Remove request_handler code duplication
- Fixes update command
- Move update_command to ApplicationCore so all apps can use it
- Adds tests for Knative update_command
- Adds specs for PatchService
- Raise error if update receives an uninstalled app
- Adds update_service spec
- Fix RuboCop offense
- Use subject in favor of go
- Adds update endpoint specs for project namespace
- Adds update endpoint specs for group namespace
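The call stack above can be sketched in plain Ruby. This is a hypothetical, heavily simplified illustration of how the controller-to-worker chain hands an application off for updating (the real GitLab classes are Rails services and Sidekiq workers; the `app` hash and `:updating` status below are stand-ins):

```ruby
# Simplified sketch of the update call chain from the commit message.
# Real GitLab services take models and enqueue Sidekiq jobs; here the
# "worker" is invoked inline so the example is self-contained.
module Clusters
  module Applications
    class PatchService
      def initialize(app)
        @app = app
      end

      # Pretend to send the upgrade command and flip the app's status.
      def execute
        @app[:status] = :updating
        @app
      end
    end

    class UpdateService
      def initialize(app)
        @app = app
      end

      # In reality this would schedule the worker asynchronously.
      def execute
        ClusterUpdateAppWorker.new.perform(@app)
      end
    end
  end
end

class ClusterUpdateAppWorker
  def perform(app)
    Clusters::Applications::PatchService.new(app).execute
  end
end

app = { name: "knative", status: :installed }
Clusters::Applications::UpdateService.new(app).execute
app[:status] # => :updating
```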
-
- 01 Mar, 2019 4 commits
-
-
Committed by Gabriel Mazetto
The new class contains the ExclusiveLease specifics shared by both the Migration and Rollback workers.
-
Committed by Gabriel Mazetto
Rollback is done similarly to Migration for Hashed Storage. It also shares the same ExclusiveLease key to prevent both from happening at the same time. All Hashed Storage related workers now share the same queue namespace, which allows for easily assigning dedicated workers.
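The shared-lease idea can be shown with an in-memory stand-in. GitLab's real ExclusiveLease is Redis-backed with timeouts; the class, key format, and helper below are illustrative only. Because migration and rollback contend for the same key, only one can run at a time:

```ruby
# In-memory stand-in for an exclusive lease. The real implementation
# (Gitlab::ExclusiveLease) stores keys in Redis with a TTL.
class FakeExclusiveLease
  @held = {}

  class << self
    # Returns a truthy value when the lease was obtained, false otherwise.
    def try_obtain(key)
      return false if @held[key]

      @held[key] = true
    end

    def cancel(key)
      @held.delete(key)
    end
  end
end

LEASE_KEY = "project_migrate_hashed_storage_worker:42" # hypothetical key format

# Both the migration and the rollback worker would guard their work with
# the SAME key, so whichever starts second is skipped.
def run_exclusively(job)
  return :skipped unless FakeExclusiveLease.try_obtain(LEASE_KEY)

  job # pretend to do the migration or rollback work here
end

run_exclusively(:migration) # => :migration (lease obtained)
run_exclusively(:rollback)  # => :skipped   (same key is still held)
```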
-
Committed by Gabriel Mazetto
Moved to the HashedStorage namespace, and added them to the `:hashed_storage` queue namespace.
-
Committed by Gabriel Mazetto
We are adding Sidekiq workers and service classes to allow rolling back a hashed storage migration. There is some refactoring involved as well, since part of the code can be reused by both the migration and the rollback logic.
-
- 27 Feb, 2019 1 commit
-
-
Committed by Jacopo
The API `GET projects/:id/traffic/fetches` allows users with write access to the repository to get the number of clones for the last 30 days.
-
- 26 Feb, 2019 1 commit
-
-
Committed by Stan Hu
When a pipeline is for a forked merge request, we have to invalidate the ETag for both the target and source project pipelines. Previously, we were only invalidating the target project's pipeline.
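The fix's idea can be sketched with a toy ETag store. The store, paths, and merge-request hash below are illustrative stand-ins for GitLab's ETag caching middleware, not the real API:

```ruby
# Toy stand-in for an ETag cache: touching a path bumps its version so
# clients' cached responses become stale.
class EtagStore
  def initialize
    @etags = Hash.new(0)
  end

  def touch(path)
    @etags[path] += 1
  end

  def etag(path)
    @etags[path]
  end
end

def expire_pipelines_cache(store, merge_request)
  paths = ["/#{merge_request[:target_project]}/pipelines"]
  # The bug: only the target path was expired. For forked MRs the source
  # project's pipelines path must be expired too.
  if merge_request[:source_project] != merge_request[:target_project]
    paths << "/#{merge_request[:source_project]}/pipelines"
  end
  paths.each { |p| store.touch(p) }
end

store = EtagStore.new
mr = { source_project: "fork/repo", target_project: "upstream/repo" }
expire_pipelines_cache(store, mr)
store.etag("/fork/repo/pipelines")     # => 1
store.etag("/upstream/repo/pipelines") # => 1
```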
-
- 21 Feb, 2019 1 commit
-
-
Committed by James Fargher
ChatOps used to be in the Ultimate tier.
-
- 15 Feb, 2019 1 commit
-
-
Committed by Sarah Yasonik
On reload, references to Metrics within classes in the Gitlab::Metrics module fail. Update all references to ::Gitlab::Metrics so that constant lookup finds the right module in development. This fix should not impact production.
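The underlying Ruby constant-lookup pitfall can be demonstrated standalone. The module and class names below are illustrative, and `remove_const` merely simulates dev-mode code reloading unloading a constant; in real GitLab the fix is writing `::Gitlab::Metrics` so lookup never falls through to an unrelated top-level constant:

```ruby
module Metrics
  NAME = "top-level Metrics"
end

module Gitlab
  module Metrics
    NAME = "Gitlab::Metrics"
  end

  class Reporter
    def self.metrics_name
      Metrics::NAME # relative lookup: resolved through the lexical scope
    end

    def self.qualified_metrics_name
      ::Gitlab::Metrics::NAME # fully qualified: always starts at the root
    end
  end
end

Gitlab::Reporter.metrics_name           # => "Gitlab::Metrics"
Gitlab::Reporter.qualified_metrics_name # => "Gitlab::Metrics"

# Simulate a dev-mode reload unloading the nested module:
Gitlab.send(:remove_const, :Metrics)
Gitlab::Reporter.metrics_name # => "top-level Metrics" (silently the wrong module!)
```

With autoloading, the fully qualified reference would trigger a reload of the right constant, while the relative lookup silently finds the wrong one.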
-
- 08 Feb, 2019 1 commit
-
-
Committed by Thong Kuah
-
- 06 Feb, 2019 2 commits
-
-
Committed by Stan Hu
Use project models instead of a list of parameters.
-
Committed by Stan Hu
When hashed storage is in use, it's helpful to have the project name associated with the request. Closes https://gitlab.com/gitlab-org/gitaly/issues/1394
-
- 05 Feb, 2019 1 commit
-
-
Committed by Peter Leitzen
-
- 26 Jan, 2019 3 commits
-
-
Committed by Gabriel Mazetto
Specs were reviewed and improved to better cover the current behavior. There was some standardization done as well to facilitate the implementation of the rollback functionality. StorageMigratorWorker was extracted to the HashedStorage namespace, where RollbackerWorker will live as well.
-
Committed by Gabriel Mazetto
This is part of the refactor to include a RollbackService in the HashedStorage module.
-
Committed by Gabriel Mazetto
We are keeping compatibility with existing scheduled jobs.
-
- 25 Jan, 2019 1 commit
-
-
Committed by Kamil Trzciński
This includes a set of APIs to manipulate the container registry. It also includes the ability to delete tags based on requested criteria, like keep-last-n, matching-name, older-than.
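A minimal sketch of how such deletion criteria compose, assuming a simple list of tag hashes. The function name, parameter names, and tag structure are hypothetical, not the actual API parameters:

```ruby
# Illustrative filter combining the three criteria from the commit message:
# match tags by name, keep only those older than a cutoff, then spare the
# N most recent matches.
def tags_to_delete(tags, keep_n:, name_regex:, older_than:)
  candidates = tags.select { |t| t[:name] =~ name_regex }
  candidates = candidates.select { |t| t[:created_at] < Time.now - older_than }
  # Keep the keep_n most recent candidates; everything else is deleted.
  candidates.sort_by { |t| t[:created_at] }.reverse.drop(keep_n)
end

now = Time.now
tags = [
  { name: "v1",     created_at: now - 400_000 },
  { name: "v2",     created_at: now - 300_000 },
  { name: "v3",     created_at: now - 200_000 },
  { name: "latest", created_at: now - 400_000 }, # doesn't match the regex
]

doomed = tags_to_delete(tags, keep_n: 1, name_regex: /\Av\d+\z/, older_than: 86_400)
doomed.map { |t| t[:name] } # => ["v2", "v1"]
```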
-
- 24 Jan, 2019 2 commits
-
-
Committed by Rémy Coutable
Signed-off-by: Rémy Coutable <remy@rymai.me>
-
Committed by Shinya Maeda
- Rename
- Introduce Destroy expired job artifacts service
- Revert a bit
- Add changelog
- Use expired
- Improve
- Fix spec
- Fix spec
- Use bang for destroy
- Introduce iteration limit
- Update comment
- Simplify more
- Refactor
- Remove unnecessary thing
- Fix comments
- Fix coding offence
- Make loop helper exception free
-
- 21 Jan, 2019 2 commits
-
-
Committed by Yorick Peterse
This refactors ExpirePipelineCacheWorker so that EE can more easily extend its logic, without having to inject code in the middle of a CE method.
-
Committed by Yorick Peterse
This simply moves the logic from the "perform" method into a separate "process_build" method, allowing EE to more easily extend this behaviour.
-
- 15 Jan, 2019 1 commit
-
-
Committed by Stan Hu
Retries in Sidekiq and in the remote mirror scheduler can cause repeated attempts in quick succession if the sync fails. Each failure will then send an e-mail to all project maintainers, which can spam users unnecessarily. Modify the logic to send one notification the first time the mirror fails by setting `error_notification_sent` to `true` and reset the flag after a successful sync. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/56222
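The notification guard described above can be sketched in plain Ruby. The real `RemoteMirror` is an ActiveRecord model and the notification goes through a mailer; this simplified stand-in only shows the flag logic:

```ruby
# Notify maintainers on the FIRST failure only; reset the guard flag once a
# sync succeeds, so the next distinct outage notifies again.
class RemoteMirror
  attr_reader :error_notification_sent, :notifications

  def initialize
    @error_notification_sent = false
    @notifications = 0 # stands in for e-mails sent to maintainers
  end

  def mark_failed!
    return if @error_notification_sent # already notified, stay quiet

    @notifications += 1
    @error_notification_sent = true
  end

  def mark_succeeded!
    @error_notification_sent = false # a later failure may notify again
  end
end

mirror = RemoteMirror.new
3.times { mirror.mark_failed! } # rapid Sidekiq retries in quick succession
mirror.notifications # => 1
mirror.mark_succeeded!
mirror.mark_failed!
mirror.notifications # => 2
```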
-
- 14 Jan, 2019 1 commit
-
-
Committed by Zeger-Jan van de Weg
In theory, the initial linking of the pool could fail, along with all the retries Sidekiq performs. This could lead to data loss. To prevent that case, linking is also done before Git's GC, which makes sure that case doesn't happen.
-
- 09 Jan, 2019 1 commit
-
-
Committed by Peter Leitzen
Enable caching for records whose primary key is not `id`.
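The idea can be sketched as a cache keyed on a configurable primary-key column instead of a hard-coded `id`. The class and attribute names are illustrative; GitLab's request-store-backed caching is more involved:

```ruby
# A cache whose key column is configurable, so records identified by e.g.
# `path` or `name` can be cached just like records identified by `id`.
class RecordCache
  def initialize(primary_key: :id)
    @primary_key = primary_key
    @store = {}
  end

  # Yields (e.g. hits the database) only on a cache miss.
  def fetch(record_attrs)
    key = record_attrs.fetch(@primary_key)
    @store[key] ||= yield
  end
end

cache = RecordCache.new(primary_key: :path)
a = cache.fetch(path: "gitlab-org/gitlab-ce") { :loaded_from_db }
b = cache.fetch(path: "gitlab-org/gitlab-ce") { :loaded_again }
a # => :loaded_from_db
b # => :loaded_from_db (cache hit, the second block never ran)
```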
-
- 07 Jan, 2019 2 commits
-
-
Committed by Heinrich Lee Yu
Load the whole file into memory to simplify the code.
-
Committed by Heinrich Lee Yu
Process CSV uploads asynchronously using a worker, then email the results.
-
- 04 Jan, 2019 1 commit
-
-
Committed by Shinya Maeda
Sort out some logic
-
- 01 Jan, 2019 1 commit
-
-
Committed by Jonathon Reinhart
gitlab-org/gitlab-shell!166 added support for collecting push options from the environment, and passing them along to the /internal/post_receive API endpoint. This change handles the new push_options JSON element in the payload, and passes them on through to the GitPushService and GitTagPushService services. Furthermore, it adds support for the first push option, ci.skip. With this change, one can use 'git push -o ci.skip' to skip CI pipeline execution. Note that the pipeline is still created, but in the "skipped" state, just like with the 'ci skip' commit message text. Implements #18667
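The ci.skip handling boils down to a small decision point. A minimal sketch, assuming push options arrive as an array of strings (the function name and hash shape are illustrative, not the real service interface):

```ruby
# The pipeline is still created either way; the push option only decides
# whether it starts in the "skipped" state instead of "pending".
def create_pipeline(push_options)
  status = push_options.include?("ci.skip") ? :skipped : :pending
  { status: status }
end

create_pipeline(["ci.skip"]) # => {:status=>:skipped}
create_pipeline([])          # => {:status=>:pending}
```

On the client side this corresponds to `git push -o ci.skip`, with gitlab-shell forwarding the option to the internal API.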
-
- 21 Dec, 2018 2 commits
-
-
Committed by George Tsiolis
-
Committed by blackst0ne
Fix the CVE-2018-16476 vulnerability.
-
- 19 Dec, 2018 2 commits
-
-
Committed by Zeger-Jan van de Weg
This action doesn't lean on reduplication, so a short call can be made to the Gitaly server to have the object pool remove its remote for the project pending deletion. https://gitlab.com/gitlab-org/gitaly/blob/f6cd55357/internal/git/objectpool/link.go#L58 When an object pool doesn't have members, this invalidates the need for a pool. So when a project leaves the pool, the pool will be destroyed in the background. Fixes: https://gitlab.com/gitlab-org/gitaly/issues/1415
-
Committed by Oswaldo Ferreira
-
- 12 Dec, 2018 1 commit
-
-
Committed by Alejandro Rodríguez
An email is sent to project maintainers containing the last mirror update error. This will allow maintainers to set up alarms and react accordingly.
-
- 08 Dec, 2018 1 commit
-
-
Committed by Zeger-Jan van de Weg
When a project is forked, the new repository used to be a deep copy of everything stored on disk, by leveraging `git clone`. This works well and makes isolation between repositories easy. However, at the start the clone is 100% the same as the origin repository, and in the case of the objects in the object directory, this is almost always going to be a lot of duplication. Object pools are a way to create a third repository that essentially only exists for its 'objects' subdirectory. This third repository's object directory will be set as the alternate location for objects. This means that when an object is missing in the local repository, Git will look in another location: the object pool repository. When Git performs garbage collection, it's smart enough to check the alternate location. When objects are duplicated, this allows Git to throw one copy away; that copy is in the local repository, while the pool remains as is. These pools have an origin location, which for now will always be a repository that itself is not a fork. When the root of a fork network is forked by a user, the fork still clones the full repository. Asynchronously, the pool repository will be created. Either one of these processes can finish earlier than the other. To handle this race condition, the Join ObjectPool operation is idempotent; given it's idempotent, we can schedule it twice with the same effect. To accommodate the holding of state, two migrations have been added. 1. Added a state column to the pool_repositories table. This column is managed by the state machine, allowing for hooks on transitions. 2. pool_repositories now has a source_project_id. This column is convenient to have for multiple reasons: it has a unique index, allowing the database to handle race conditions when creating a new record, and it's nice to know who the host is, as that's a short link to the fork network's root.
Object pools are only available for public projects which use hashed storage, and only when forking from the root of the fork network. (That is, the project being forked from is not itself a fork.) In this commit message I use both ObjectPool and PoolRepository, which are alike but different from each other. ObjectPool refers to whatever is stored on disk and managed by Gitaly; PoolRepository is the record in the database.
-
- 07 Dec, 2018 4 commits
-
-
Committed by Tiago Botelho
The EE merge request can be found here: https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8442
-
Committed by Douwe Maan
-
Committed by Jan Provaznik
It gathers a list of file paths to delete before destroying the parent object. After the parent object is destroyed, these paths are scheduled for deletion asynchronously. Carrierwave needed an associated model for deleting an upload file. To avoid this requirement, a simple Fog/File layer is used directly for file deletion, which allows us to use just a simple list of paths.
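The two-phase deletion can be sketched in plain Ruby: collect upload paths while the parent still exists, destroy the parent, then delete the files. In GitLab the second phase runs in a Sidekiq worker against Fog storage; the inline version and all names below are illustrative:

```ruby
# Stand-in for remote storage (Fog in the real implementation).
class FakeStorage
  attr_reader :files

  def initialize(files)
    @files = files
  end

  def delete(path)
    @files.delete(path)
  end
end

def destroy_with_uploads(parent, storage)
  paths = parent[:upload_paths].dup       # gather BEFORE the parent is gone
  parent.clear                            # "destroy" the parent object
  paths.each { |p| storage.delete(p) }    # would be scheduled async in reality
  paths
end

storage = FakeStorage.new(["uploads/a.png", "uploads/b.png", "other.txt"])
project = { upload_paths: ["uploads/a.png", "uploads/b.png"] }
destroy_with_uploads(project, storage)
storage.files # => ["other.txt"]
```

Passing a plain list of paths to the worker is what removes the Carrierwave requirement of having the (now destroyed) model around.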
-
Committed by Nick Thomas
-
- 06 Dec, 2018 1 commit
-
-
Committed by Stan Hu
Determined by running the script: ``` included = `git grep --name-only ShellAdapter`.chomp.split("\n") used = `git grep --name-only gitlab_shell`.chomp.split("\n") included - used ```
-
- 05 Dec, 2018 1 commit
-
-
Committed by Shinya Maeda
-