- 27 Feb 2019 (1 commit)

By Jacopo
The `GET /projects/:id/traffic/fetches` API allows users with write access to the repository to get the number of clones for the last 30 days.
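A minimal sketch of consuming such an endpoint. The path comes from the changelog entry above; the response shape assumed in the comment (per-day fetch counts) is a guess for illustration, not documented behavior.

```ruby
require 'json'
require 'uri'

# Build the URI for the fetch-statistics endpoint described above.
def fetch_statistics_uri(base_url, project_id)
  URI("#{base_url}/api/v4/projects/#{project_id}/traffic/fetches")
end

# Sum the per-day clone counts from a parsed JSON response.
# Assumed shape: {"fetches" => {"days" => [{"count" => 3, "date" => "..."}]}}
def total_fetches(parsed)
  parsed.fetch('fetches', {}).fetch('days', []).sum { |d| d['count'] }
end
```

An actual request would add an authentication header and JSON-parse the body before calling `total_fetches`.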
-
- 06 Feb 2019 (1 commit)

By Stan Hu
When hashed storage is in use, it's helpful to have the project name associated with the request. Closes https://gitlab.com/gitlab-org/gitaly/issues/1394
-
- 04 Feb 2019 (1 commit)

By Felipe Artur
-
- 31 Jan 2019 (3 commits)

By Kamil Trzciński
RubyZip allows us to perform strong validation of expanded paths when extracting files. We introduce the following additional checks to the extraction routines: 1. None of the path components can be a symlink. 2. We drop privilege support for directories. 3. The symlink source needs to point within the target directory, like `public/`. 4. The symlink source needs to exist ahead of time.
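The checks above can be sketched as plain path predicates. This is an illustration of the technique, not GitLab's actual implementation; `extract_path` is the directory we are willing to write into.

```ruby
require 'pathname'

# Check that an archive entry, once joined to the extraction directory and
# expanded, still resolves inside that directory (rejects "../" traversal).
def safe_entry_path?(extract_path, entry_name)
  base = File.expand_path(extract_path)
  destination = File.expand_path(File.join(base, entry_name))
  destination == base || destination.start_with?(base + File::SEPARATOR)
end

# Check 1 from the list above: no component of the on-disk path may be a
# symlink, otherwise a crafted archive could escape via a pre-created link.
def no_symlinks_in_path?(extract_path, entry_name)
  current = extract_path
  Pathname.new(entry_name).each_filename.all? do |part|
    current = File.join(current, part)
    !File.symlink?(current)
  end
end
```

Both predicates would run before any byte of the entry is written to disk.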
-
By Francisco Javier López
-
By James Lopez
-
- 26 Jan 2019 (3 commits)

By Gabriel Mazetto
We need this new state for the Geo event logic in EE.
-
By Gabriel Mazetto
Specs were reviewed and improved to better cover the current behavior. Some standardization was done as well to facilitate the implementation of the rollback functionality. StorageMigratorWorker was extracted into the HashedStorage namespace, where RollbackerWorker will live as well.
-
By Gabriel Mazetto
This is part of the refactor to include a RollbackService in the HashedStorage module.
-
- 25 Jan 2019 (1 commit)

By Kamil Trzciński
This includes a set of APIs to manipulate the container registry, including the ability to delete tags based on requested criteria, like keep-last-n, matching-name, and older-than.
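The three criteria named above can be sketched as a single filter over a tag list. The parameter names and the `Tag` struct here are illustrative, not the API's actual fields.

```ruby
Tag = Struct.new(:name, :created_at)

# Select tags eligible for deletion: only tags whose name matches the regex
# and which are older than the cutoff, while keeping the N most recent of them.
def tags_to_delete(tags, keep_n:, name_regex:, older_than:)
  tags
    .select { |t| t.name =~ name_regex }
    .select { |t| t.created_at < Time.now - older_than }
    .sort_by(&:created_at)
    .reverse
    .drop(keep_n) # keep the keep_n most recent matching tags
end
```

Everything that survives the filters but falls outside the most recent `keep_n` entries is returned for deletion.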
-
- 24 Jan 2019 (1 commit)

By Rémy Coutable
Signed-off-by: Rémy Coutable <remy@rymai.me>
-
- 23 Jan 2019 (1 commit)

By Kamil Trzciński
RubyZip allows us to perform strong validation of expanded paths when extracting files. We introduce the following additional checks to the extraction routines: 1. None of the path components can be a symlink. 2. We drop privilege support for directories. 3. The symlink source needs to point within the target directory, like `public/`. 4. The symlink source needs to exist ahead of time.
-
- 22 Jan 2019 (2 commits)

By Gabriel Mazetto
We still rely on the Dirty API for the project rename (before/after) values, but we don't access the Dirty API from the service class anymore. The previous value is now part of the initialization, which makes it easier to test and makes the behavior clearer. The same was done with `rename_repo` on the Storage classes: we now provide the before and after values as part of the method signature.
-
By Gabriel Mazetto
During a previous refactor on the project model, code related to hashed storage was extracted into AfterRenameService, see 4b9c17f1. The "path_before" was changed from using `previous_changes['path']` to `path_was`. They are not equivalent: `path_was` exists reliably only *before* persisting to the database. Once database persistence is confirmed, the value is moved to `previous_changes[:attribute_name]`. Because the repository/attachments rename or storage upgrade happens after the record is persisted to the database, we were in fact not passing the right parameters (and therefore not doing what we were supposed to).
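The distinction between `path_was` and `previous_changes` can be modeled with a few lines of plain Ruby. This is a hand-rolled simplification of the ActiveModel::Dirty semantics described above, not Rails code.

```ruby
# Minimal model of dirty tracking for a single `path` attribute.
class DirtyPath
  attr_reader :path, :path_was, :previous_changes

  def initialize(path)
    @path = path
    @path_was = path
    @previous_changes = {}
  end

  def path=(value)
    @path = value # @path_was keeps the pre-save value until save
  end

  def save
    @previous_changes = { 'path' => [@path_was, @path] }
    @path_was = @path # after persisting, path_was no longer holds the old value
  end
end
```

This is exactly the trap the commit fixes: code running after persistence must read `previous_changes`, because by then `path_was` already returns the new value.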
-
- 21 Jan 2019 (1 commit)

By Francisco Javier López
-
- 18 Jan 2019 (1 commit)

By Oswaldo Ferreira
1. When removing projects, we can end up leaving the +deleted repo path dirty and not successfully removing the non-deleted namespace (the mv process is not atomic and can be killed without fully moving the path). 2. To solve that, we're adding a clean-up phase in an ensure block which schedules deletion of a possibly stale +deleted path. Note that we don't check the current state (whether or not there is a repo) before scheduling the deletion. That's intentional, in order to leverage the idempotency of Gitlab::GitalyClient::NamespaceService#remove and ensure consistency.
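The ensure-plus-idempotent-removal pattern described above can be sketched as follows. This is an illustration under assumed names, not the actual service code; the point is that the cleanup runs unconditionally and removing an already-absent path is a no-op.

```ruby
require 'fileutils'

# Move the repo aside to a "+deleted" path, run the rest of the removal,
# and always clean the "+deleted" path up, even if removal dies mid-way.
def remove_repository(repo_path)
  deleted_path = "#{repo_path}+deleted"
  FileUtils.mv(repo_path, deleted_path) if File.exist?(repo_path)
  yield if block_given? # the rest of the removal, which may fail mid-way
ensure
  # Unconditional and idempotent: safe even if nothing is there to remove.
  FileUtils.rm_rf(deleted_path)
end
```

Because `rm_rf` succeeds whether or not the path exists, scheduling it twice (or after a crash) converges to the same clean state.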
-
- 16 Jan 2019 (2 commits)

By Yorick Peterse
This refactors some of the logic used for protecting default branches, in particular Project#after_create_default_branch. The logic for this method is moved into a separate service class. Ideally we'd get rid of Project#after_create_default_branch entirely, but unfortunately Project#after_import depends on it. This means it has to stick around until we also refactor Project#after_import. For branch protection levels we introduce Gitlab::Access::BranchProtection, which provides a small wrapper around Integer based branch protection levels. Using this class removes the need for having to constantly refer to Gitlab::Access::PROTECTION_* constants.
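A small wrapper in the spirit of the `Gitlab::Access::BranchProtection` class mentioned above. The constant values and predicate names here are assumptions for illustration; the idea is simply to trade scattered integer comparisons for named queries.

```ruby
# Wrap an Integer branch-protection level behind named predicates.
class BranchProtection
  PROTECTION_NONE = 0          # assumed values, for illustration only
  PROTECTION_DEV_CAN_PUSH = 1
  PROTECTION_FULL = 2

  attr_reader :level

  def initialize(level)
    @level = level
  end

  def any?
    level != PROTECTION_NONE
  end

  def developer_can_push?
    level == PROTECTION_DEV_CAN_PUSH
  end

  def fully_protected?
    level == PROTECTION_FULL
  end
end
```

Call sites then read `protection.developer_can_push?` instead of comparing against `PROTECTION_*` constants directly.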
-
By Kamil Trzciński
-
- 10 Jan 2019 (1 commit)

By Reuben Pereira
-
- 08 Jan 2019 (2 commits)

By Gabriel Mazetto
In the previous code, we locked the project during the migration scheduling step, which works fine for small setups but can be problematic in really big installations. We have now moved the logic inside the worker, so we minimize the time a project is read-only. We also make sure we only do that if the reference counter is `0` (no operation is currently in progress).
-
By Peter Leitzen
Re-use the operations controller, which already handles tracing settings.
-
- 07 Jan 2019 (1 commit)

By James Lopez
-
- 06 Jan 2019 (1 commit)

By Peter Leitzen
This commit prepares the structure for the upcoming error tracking feature.
-
- 22 Dec 2018 (2 commits)
- 18 Dec 2018 (1 commit)

By Francisco Javier López
-
- 14 Dec 2018 (1 commit)

By Felipe Artur
Fix leaking information about confidential issues in TODOs when a user is downgraded to guest access.
-
- 12 Dec 2018 (1 commit)

By Nick Thomas
-
- 08 Dec 2018 (1 commit)

By Zeger-Jan van de Weg
When a project is forked, the new repository used to be a deep copy of everything stored on disk, by leveraging `git clone`. This works well and makes isolation between repositories easy. However, at the start the clone is 100% the same as the origin repository, and in the case of the objects in the object directory, this almost always means a lot of duplication.

Object pools are a way to create a third repository that essentially only exists for its 'objects' subdirectory. This third repository's object directory is set as an alternate location for objects. This means that when an object is missing in the local repository, git will look in another location: the object pool repository. When Git performs garbage collection, it's smart enough to check the alternate location, so when objects are duplicated it can throw one copy away. That copy is in the local repository, while the pool remains as is.

These pools have an origin location, which for now will always be a repository that is itself not a fork. When the root of a fork network is forked by a user, the fork still clones the full repository. Asynchronously, the pool repository will be created. Either one of these processes can finish earlier than the other. To handle this race condition, the Join ObjectPool operation is idempotent; given that it's idempotent, we can schedule it twice with the same effect.

To accommodate the holding of state, two migrations have been added: 1. A state column was added to the pool_repositories table. This column is managed by the state machine, allowing for hooks on transitions. 2. pool_repositories now has a source_project_id. This column is convenient to have for multiple reasons: it has a unique index, allowing the database to handle race conditions when creating a new record, and it's nice to know who the host is, as that's a short link to the fork network's root.

Object pools are only available for public projects which use hashed storage, and only when forking from the root of the fork network (that is, the project being forked from isn't itself a fork). In this commit message I use both ObjectPool and PoolRepository, which are alike but different from each other: ObjectPool refers to whatever is stored on disk and managed by Gitaly; PoolRepository is the record in the database.
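The alternate-location mechanism described above boils down to one file: `objects/info/alternates` in the member repository, listing the pool's object directory. A sketch of wiring that up, with illustrative paths (Gitaly does this on real repositories; this only shows the file layout):

```ruby
require 'fileutils'

# Point a member repository's object lookup at a pool's object directory by
# writing the objects/info/alternates file git consults for missing objects.
def link_to_object_pool(member_repo, pool_repo)
  info_dir = File.join(member_repo, 'objects', 'info')
  FileUtils.mkdir_p(info_dir)
  File.write(File.join(info_dir, 'alternates'),
             File.join(pool_repo, 'objects') + "\n")
end
```

Once this file exists, `git gc` in the member repository can drop any object that is also reachable in the pool, which is what deduplicates forks.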
-
- 07 Dec 2018 (1 commit)

By Nick Thomas
-
- 05 Dec 2018 (2 commits)

By Thong Kuah
This reflects how we now create or update
-
By Thong Kuah
AFAIK the only relevant place is Projects::CreateService; this gets called when a user creates a new project, forks a project, or does those things via the API. Also create a k8s namespace for the new group hierarchy when transferring a project between groups. Uses the new Refresh service to create k8s namespaces. Ensure we use Cluster#cluster_project: if a project has multiple clusters (EE), using Project#cluster_project is not guaranteed to return the cluster_project for this cluster, so switch to using Cluster#cluster_project (at this stage a cluster can only have 1 cluster_project). Also, remove the rescue so that sidekiq can retry.
-
- 27 Nov 2018 (1 commit)

By Tiago Botelho
Moves the import-related columns and code from the Project model over to the ProjectImportState model.
-
- 19 Nov 2018 (1 commit)

By Nick Thomas
-
- 01 Nov 2018 (1 commit)

By George Tsiolis
-
- 22 Oct 2018 (1 commit)

By Yorick Peterse
This moves the logic of Project#rename_repo and all methods _only_ used by this method into a new service class: Projects::AfterRenameService. By moving this code into a separate service class we can more easily refactor it, and we also get rid of some RuboCop "disable" statements automatically. During the refactoring of this code, I removed most of the explicit logging using Gitlab::AppLogger. The data that was logged would not be useful when debugging renaming issues, as it does not add any value on top of data provided by users. I also removed a variety of comments that either mentioned something the code does in literal form, or contained various grammatical errors. Instead we now resort to more clearly named methods, removing the need for code comments. This method was chosen based on analysis in https://gitlab.com/gitlab-org/release/framework/issues/28. In this issue we determined this method has seen a total of 293 lines being changed in it. We also noticed that RuboCop determined the ABC size (https://www.softwarerenovation.com/ABCMetric.pdf) was too great.
-
- 19 Oct 2018 (1 commit)

By Bob Van Landuyt
This removes the `ForkedProjectLink` model that has been replaced by the `ForkNetworkMember` and `ForkNetwork` combination. All existing relations have been adjusted to use these new models. The `forked_project_link` table has been dropped. The "Forks" count on the admin dashboard has been updated to count all `ForkNetworkMember` rows and deduct the number of `ForkNetwork` rows. This is because now the "root-project" of a fork network also has a `ForkNetworkMember` row. This count could become inaccurate when the root of a fork network is deleted.
-
- 16 Oct 2018 (1 commit)
- 11 Oct 2018 (1 commit)

By Stan Hu
Project deletions were failing with "Can't modify frozen hash" because: 1. Project#remove_exports was called in the after_destroy hook 2. This would remove the file and update ImportExportUpload 3. ImportExportUpload#save would attempt to write to a destroyed model To avoid this, we just check if ImportExportUpload has been destroyed before attempting to save it. This would have a side effect of not running after_commit hooks to delete the repository on disk, making it impossible to delete the project entirely. Closes #52362
-
- 05 Oct 2018 (1 commit)

By Tuomo Ala-Vannesluoma
-