1. 08 December 2018, 1 commit
    • Allow public forks to be deduplicated · 896c0bdb
      Zeger-Jan van de Weg authored
      When a project is forked, the new repository used to be a deep copy of
      everything stored on disk, created with `git clone`. This works well
      and makes isolation between repositories easy. However, at the start
      the clone is 100% identical to the origin repository, so the objects in
      the object directory are almost entirely duplicated.
      
      Object Pools are a way to create a third repository that essentially
      exists only for its 'objects' subdirectory. This third repository's
      object directory is set as an alternate object location: when an object
      is missing from the local repository, Git looks in this other location,
      the object pool repository.
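
      A minimal sketch of the on-disk wiring (paths and file layout here are
      hypothetical, not Gitaly's actual code): Git consults
      `objects/info/alternates` and falls back to each listed directory when
      an object is missing locally.

      ```ruby
      require 'fileutils'

      # Hypothetical paths for a pool repository and a member (fork) repository.
      pool_objects = '/repositories/@pools/pool.git/objects'
      member_repo  = '/repositories/@hashed/fork.git'

      # One line in objects/info/alternates is enough for Git to resolve
      # missing objects from the pool's object store.
      alternates = File.join(member_repo, 'objects', 'info', 'alternates')
      FileUtils.mkdir_p(File.dirname(alternates))
      File.write(alternates, pool_objects + "\n")
      ```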
      
      When Git performs garbage collection, it is smart enough to check the
      alternate location. When an object exists in both places, Git is
      allowed to throw one copy away: the copy in the local repository, while
      the pool's copy remains as is.
      
      These pools have an origin location, which for now will always be a
      repository that is not itself a fork. When the root of a fork network
      is forked by a user, the fork still clones the full repository; the
      pool repository is created asynchronously.
      
      Either of these processes can complete before the other. To handle
      this race condition, the Join ObjectPool operation is idempotent: since
      it is idempotent, we can schedule it twice with the same effect.
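
      A sketch of why double scheduling is harmless (method names are
      assumptions, not the actual Gitaly RPC): the join is a no-op when the
      link already exists.

      ```ruby
      # Hypothetical illustration of an idempotent join: running it a second
      # time leaves the repository in exactly the same state.
      def join_object_pool(repository, pool)
        return if repository.alternates.include?(pool.objects_dir) # already joined

        repository.add_alternate(pool.objects_dir)
      end
      ```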
      
      To accommodate holding this state, two migrations have been added
      (sketched after this list):
      1. A state column was added to the pool_repositories table. This column
      is managed by the state machine, allowing for hooks on transitions.
      2. pool_repositories now has a source_project_id. This column is
      convenient to have for multiple reasons: it has a unique index, allowing
      the database to handle race conditions when creating a new record, and
      it's nice to know which project is the host, as that's a short link to
      the fork network's root.
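
      A condensed sketch of those two migrations (column names follow the
      list above; the Rails version and other details are assumptions):

      ```ruby
      class AddStateToPoolRepositories < ActiveRecord::Migration[5.0]
        def change
          # Managed by the state machine, which hooks into transitions.
          add_column :pool_repositories, :state, :string
        end
      end

      class AddSourceProjectToPoolRepositories < ActiveRecord::Migration[5.0]
        def change
          # The unique index lets the database resolve creation races.
          add_reference :pool_repositories, :source_project, index: { unique: true }
        end
      end
      ```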
      
      Object pools are only available for public projects that use hashed
      storage, and only when forking from the root of the fork network. (That
      is, the project being forked from is not itself a fork.)
      
      In this commit message I use both ObjectPool and PoolRepository, which
      are alike but different from each other. ObjectPool refers to whatever
      is stored on disk and managed by Gitaly; PoolRepository is the record
      in the database.
  2. 07 December 2018, 13 commits
  3. 06 December 2018, 3 commits
  4. 05 December 2018, 12 commits
    • Rename project's pipelines relation · a6778fc6
      Francisco Javier López authored
    • Prevent a path traversal attack on global file templates · 69645389
      Nick Thomas authored
      The API permits path traversal sequences such as '../' to be passed
      down to the template finder. Detect these requests and make them fail
      with a 500 response code.
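
      A minimal sketch of such a guard (hypothetical, not necessarily the
      shipped patch): reject template keys containing traversal sequences
      before they reach the finder.

      ```ruby
      # Hypothetical check; the caller turns the raised error into a 500.
      def check_path_traversal!(name)
        raise ArgumentError, 'Invalid path' if name.include?('..') || name.start_with?('/')

        name
      end
      ```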
    • Add UsageData for group/project clusters · 821b4fde
      Dylan Griffith authored
    • ca2c5ddb
    • Revert "LfsToken uses JSONWebToken::HMACToken by default" · 00acef43
      🤖 GitLab Bot 🤖 authored
      This reverts commit 22954f22
    • Merge request pipelines · e62bfc78
      Shinya Maeda authored
    • LfsToken uses JSONWebToken::HMACToken by default · 22954f22
      Ash McKenzie authored
      LfsToken::HMACToken#token_valid?() is checked first; if it returns
      false, we fall back to looking in Redis via
      LfsToken::LegacyRedisDeviseToken#token_valid?().
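
      A sketch of that fallback (only the two quoted methods come from the
      commit message; the surrounding structure is an assumption):

      ```ruby
      # Prefer the HMAC-based token; fall back to the legacy Redis token.
      # `actor` is assumed to be provided by the enclosing class.
      def token_valid?(token)
        LfsToken::HMACToken.new(actor).token_valid?(token) ||
          LfsToken::LegacyRedisDeviseToken.new(actor).token_valid?(token)
      end
      ```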
    • Use user? instead · 3bccd2b1
      Ash McKenzie authored
    • Use a 32-byte version of db_key_base for web hooks · 2f2b0ad3
      Nick Thomas authored
      AES-256-GCM cipher mode requires a key that is exactly 32 bytes long.
      We already handle the case where the key is too long by truncating it,
      but the key can also be too short in some installations. Switching to a
      key that is always exactly the right length (by right-padding with
      ASCII '0' characters) allows encryption to proceed without breaking
      backward compatibility.
      
      When the key is too short, encryption fails with an `ArgumentError`,
      making the web hooks functionality unusable. As a result, no rows can
      exist with values encrypted with a too-short key.

      When the key is too long, it is silently truncated to 32 bytes. In that
      case the effective key is unchanged by this fix, so values encrypted
      under the truncated key will still be decrypted successfully.
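
      A minimal sketch of deriving the 32-byte key (helper name is an
      assumption; the padding character follows the description above):

      ```ruby
      # Truncate when too long, right-pad with ASCII '0' when too short,
      # so the result is always exactly 32 bytes.
      def db_key_base_32(raw_key)
        raw_key.byteslice(0, 32).ljust(32, '0')
      end

      db_key_base_32('secret').bytesize # => 32
      ```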
    • Various improvements to hierarchy sorting · f85440e6
      Thong Kuah authored
      - Rename ordered_group_clusters_for_project -> ancestor_clusters_for_clusterable
      - Improve the name of the order option: it makes much more sense to
      have `hierarchy_order: :asc` and `hierarchy_order: :desc`
      - Allow ancestor_clusters_for_clusterable for groups
      - Re-use code already present in Project
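
      A usage sketch of the renamed scope (receiver and call shape are
      assumptions):

      ```ruby
      # Closest ancestor's clusters first; :desc reverses the order.
      Clusters::Cluster.ancestor_clusters_for_clusterable(project, hierarchy_order: :asc)
      Clusters::Cluster.ancestor_clusters_for_clusterable(project, hierarchy_order: :desc)
      ```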
    • Deploy to clusters for a project's groups · 5bb2814a
      Thong Kuah authored
      Look for matching clusters starting from the closest ancestor group,
      then go up the ancestor tree.

      Then use Ruby to get the clusters for each group in order. Not that
      efficient, considering we will be doing up to
      `NUMBER_OF_ANCESTORS_ALLOWED` queries, but it's a finite number.

      Explicitly order the query by depth. This allows us to control the
      ordering explicitly and also to reverse it, which is useful for staying
      consistent with Clusters::Cluster.on_environment (EE), which does
      reverse ordering.

      Querying group clusters is put behind a feature flag, so that if we run
      into performance issues we can easily disable it.
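
      A sketch of the lookup order in plain Ruby (names are assumptions; the
      real query orders by depth in SQL):

      ```ruby
      # Walk from the closest ancestor group upward, collecting each group's
      # clusters in order; the walk is capped by NUMBER_OF_ANCESTORS_ALLOWED.
      def ancestor_clusters(project)
        project.group.self_and_ancestors.flat_map(&:clusters)
      end
      ```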
    • Modify service so that it can be re-run · d3866fb4
      Thong Kuah authored
      If the service fails mid-point, we should be able to re-run it. So,
      detect the presence of any previously created Kubernetes resource and
      update or create accordingly.
      
      Fix specs accordingly. In the case of finalize_creation_service_spec.rb,
      I decided to stub out the async worker rather than maintaining
      individual stubs for various kubeclient calls for that worker.
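
      A sketch of the update-or-create pattern with the kubeclient gem (the
      service-account resource is just an example):

      ```ruby
      require 'kubeclient'

      # Re-runnable: update the resource if it already exists, create it
      # otherwise, so a failed run can be retried safely.
      def ensure_service_account(client, resource)
        client.get_service_account(resource.metadata.name, resource.metadata.namespace)
        client.update_service_account(resource)
      rescue Kubeclient::ResourceNotFoundError
        client.create_service_account(resource)
      end
      ```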
  5. 04 December 2018, 11 commits