- 05 Jun 2018, 13 commits
-
-
Committed by Sean McGivern
This is tricky: the query was being run in `ObjectStorage::Extension::RecordsUploads#retrieve_from_store!`, but we can't just add batch loading there, because the `#upload=` method there would use the result immediately, making the batch only have one item. Instead, we can pre-emptively add an item to the batch whenever an avatarable object is initialized, and then reuse that batch item in `#retrieve_from_store!`. However, this also has problems:

1. There is a lot of logic in `Avatarable#retrieve_upload_from_batch`.
2. Some of that logic constructs a 'fake' model for the batch key. This should be fine, because of ActiveRecord's override of `#==`, but it relies on that staying the same.
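The batch-loading idea above can be sketched in plain Ruby. Everything here is a hypothetical stand-in (`Batch`, `register`, and `fetch` are illustrative names; GitLab itself uses the batch-loader gem with ActiveRecord): keys are registered eagerly when objects are initialized, so the first forced lookup resolves every pending key in one pass instead of one query per record.

```ruby
# Minimal sketch of batch loading (hypothetical Batch class).
class Batch
  def initialize(&loader)
    @loader = loader   # receives all pending keys at once
    @keys = []
    @results = nil
  end

  # Pre-emptively add a key (e.g. when a model is initialized).
  def register(key)
    @keys << key unless @keys.include?(key)
  end

  # Force the batch: the first call resolves every registered key together.
  def fetch(key)
    register(key)
    @results ||= @loader.call(@keys)
    @results[key]
  end
end

queries = 0
uploads_by_path = { "a.png" => "upload-a", "b.png" => "upload-b" }

batch = Batch.new do |keys|
  queries += 1   # one "query" serves the whole batch
  keys.map { |k| [k, uploads_by_path[k]] }.to_h
end

# Registering up front is what keeps the batch from degenerating to one item:
batch.register("a.png")
batch.register("b.png")

puts batch.fetch("a.png")  # upload-a
puts batch.fetch("b.png")  # upload-b
puts queries               # 1
```

The key point is the eager `register` calls: without them, the first `fetch` would force a batch containing a single key, which is exactly the failure mode the commit message describes.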
-
Committed by André Luís
-
Committed by Kamil Trzciński
-
Committed by Kamil Trzciński
-
Committed by Jose Ivan Vargas
-
Committed by Jasper Maes
-
Committed by Oswaldo Ferreira
This currently causes 500 errors when loading the MR page (Discussion) in a few scenarios. We were not considering detailed diff headers such as "--- a/doc/update/mysql_to_postgresql.md\n+++ b/doc/update/mysql_to_postgresql.md" when cropping the diff. To address it, we now use Gitlab::Diff::Parser to clean the diffs and build Gitlab::Diff::Line objects we can iterate over and filter on.
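The header problem can be illustrated with a minimal, hypothetical helper (the real fix goes through `Gitlab::Diff::Parser`; the regex and `croppable_lines` name here are illustrative): diff file headers are filtered out before cropping, so only genuine content lines are counted.

```ruby
# Lines that are diff metadata, not content (a simplified approximation).
HEADER = %r{\A(diff --git|index |--- (a/|/dev/null)|\+\+\+ (b/|/dev/null)|@@ )}

# Keep only the lines that are safe to crop on.
def croppable_lines(raw_diff)
  raw_diff.lines.reject { |line| line.match?(HEADER) }
end

diff = <<~DIFF
  --- a/doc/update/mysql_to_postgresql.md
  +++ b/doc/update/mysql_to_postgresql.md
  @@ -1,2 +1,2 @@
  -old line
  +new line
DIFF

croppable_lines(diff).each { |l| puts l }
# -old line
# +new line
```

Cropping the raw string without such filtering is what miscounted lines when a file header like `--- a/...` happened to fall inside the cropped window.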
-
Committed by Stan Hu
-
Committed by Stan Hu
This was being masked by the statement cache because only one author was used per issue in the test. Also adds support for an RSpec matcher `exceed_all_query_limit`.
-
Committed by Stan Hu
This was being masked by the statement cache because only one author was used per issue in the test. Also adds support for an RSpec matcher `exceed_all_query_limit`.
-
Committed by Kamil Trzciński
-
Committed by Yorick Peterse
When importing a GitHub pull request we would perform all work in a single database transaction. This is less than ideal, because we perform various slow Git operations when creating a merge request. This in turn can lead to many DB connections being used, while just waiting for an IO operation to complete. To work around this, we now move most of the heavy lifting out of the database transaction. Some extra error handling is added to ensure we can resume importing a partially imported pull request, instead of just throwing an error. This commit also changes the specs for IssueImporter so they don't rely on deprecated RSpec methods.
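The restructuring can be sketched as follows. All names here are illustrative stubs (the real code lives in GitLab's GitHub importer, and `transaction` stands in for `ActiveRecord::Base.transaction`): the slow, IO-bound Git work runs before the transaction opens, only the fast row insertion holds a transaction, and a failure marks the pull request for a resumable retry instead of aborting.

```ruby
IMPORTED = []   # stand-in for rows written to the database
RETRIES  = []   # stand-in for pull requests marked as resumable

# Stand-in for ActiveRecord::Base.transaction.
def transaction
  yield
end

# Stub for a slow, network/disk-bound Git operation.
def fetch_diff_from_git(pr)
  raise "git failed" if pr == :broken   # simulate a partial failure
  "diff for #{pr}"
end

def insert_merge_request_row(pr, diff)
  IMPORTED << [pr, diff]
end

def mark_for_retry(pr)
  RETRIES << pr
end

def import_pull_request(pr)
  # Slow IO happens OUTSIDE the transaction, so no DB connection
  # sits idle while we wait on Git.
  diff = fetch_diff_from_git(pr)

  transaction do
    insert_merge_request_row(pr, diff)  # only fast DB work inside
  end
rescue StandardError
  mark_for_retry(pr)  # resumable instead of fatal
end

import_pull_request(1)
import_pull_request(:broken)
puts IMPORTED.inspect  # [[1, "diff for 1"]]
puts RETRIES.inspect   # [:broken]
```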
-
Committed by Jarka Kadlecová
-
- 04 Jun 2018, 4 commits
-
-
Committed by Francisco Javier López
-
Committed by Kamil Trzciński
-
Committed by Bob Van Landuyt
This includes the change that prints the @username of a user instead of the full name. https://gitlab.com/gitlab-org/gitlab-shell/merge_requests/204
-
Committed by Shinya Maeda
-
- 03 Jun 2018, 3 commits
-
-
Committed by Takuya Noguchi
-
Committed by Takuya Noguchi
-
Committed by Stan Hu
Now that we are checking `MergeRequest#for_fork?`, we also need the source project preloaded for a merge request.
-
- 02 Jun 2018, 4 commits
-
-
Committed by Jasper Maes
-
Committed by Stan Hu
attr_encrypted does different things with `key` depending on what mode you are using:

1. In `:per_attribute_iv_and_salt` mode, it generates a hash with the salt: https://github.com/attr-encrypted/encryptor/blob/c3a62c4a9e74686dd95e0548f9dc2a361fdc95d1/lib/encryptor.rb#L77. There is no need to truncate the key to 32 bytes here.
2. In `:per_attribute_iv` mode, it sets the key directly to the password, so truncation to 32 bytes is necessary.

Closes #47166
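The two modes can be demonstrated with Ruby's OpenSSL standard library. This is a sketch of the principle rather than the gem's exact code path (`db_key_base` and the salt value are illustrative): direct-key mode must hand AES-256 exactly 32 bytes, while salt mode stretches the password into a fixed-length key (the encryptor gem uses PBKDF2 for this), so truncation is unnecessary there.

```ruby
require "openssl"

db_key_base = "0123456789abcdef" * 4   # 64-byte secret, too long for AES-256

# :per_attribute_iv mode: the key is used directly, so truncate to 32 bytes.
aes_key = db_key_base[0, 32]
cipher = OpenSSL::Cipher.new("aes-256-gcm")
cipher.encrypt
cipher.key = aes_key                   # a wrong-sized key would raise here

# :per_attribute_iv_and_salt mode: the key is stretched together with the
# salt into cipher.key_len (32) bytes, so no truncation is needed.
salted_key = OpenSSL::PKCS5.pbkdf2_hmac_sha1(
  db_key_base, "some-salt", 2000, cipher.key_len
)

puts aes_key.bytesize     # 32
puts salted_key.bytesize  # 32
```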
-
Committed by Sam Beckham
-
Committed by Stan Hu
This version of the gem uses API v4 by default: https://github.com/linchus/omniauth-gitlab/commit/fd13de9f251fdaa72ba0195bda47cd2cb8731084
-
- 01 Jun 2018, 7 commits
-
-
Committed by 🙈 jacopo beschi 🙉
-
Committed by samdbeckham
-
Committed by NLR
-
Committed by Tiago Botelho
-
Committed by Francisco Javier López
-
Committed by Mark Chao
"Maintainer" will be freed to be used for #42751
-
Committed by Paul Slaughter
-
- 31 May 2018, 9 commits
-
-
Committed by Felipe Artur
-
Committed by Imre Farkas
-
Committed by Jarka Kadlecová
-
Committed by Kushal Pandya
-
Committed by Sam Beckham
-
Committed by Stan Hu
In CE, every `Issue` entity is also a `ProjectEntity`, which calls `entity&.project.try(:id)` to show the project ID. In an API request with 100 issues, this would hit the Rails statement cache 100 times for the same project and cause unnecessary overhead, as related models would also be loaded.

In EE, we call `Issue#supports_weight?` for each issue, which then calls `project&.feature_available?(:issue_weights)`. If the project is not preloaded, this incurs additional overhead, as each individual Project object has to be queried. This can lead to a significant performance hit: in loading the CE project with 100 issues, this contributed to at least 22% of the load time.

See https://gitlab.com/gitlab-org/gitlab-ce/issues/47031 for why testing this is a bit tricky.
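The effect of preloading can be sketched without Rails. Everything below is illustrative (a counting lambda stands in for a project lookup query): without preloading, each of the 100 issues triggers its own lookup for the same project; with preloading, each distinct project is fetched once and reused.

```ruby
PROJECTS = { 1 => "gitlab-ce" }   # stand-in for the projects table

project_queries = 0
find_project = lambda do |id|
  project_queries += 1            # each call stands in for one SQL query
  PROJECTS[id]
end

issues = Array.new(100) { { project_id: 1 } }

# N+1 pattern: 100 lookups for the same project.
issues.each { |i| find_project.call(i[:project_id]) }
n_plus_one_queries = project_queries

# Preloading: one lookup per distinct project id, reused for every issue.
project_queries = 0
preloaded = issues.map { |i| i[:project_id] }.uniq
                  .map { |id| [id, find_project.call(id)] }.to_h
issues.each { |i| preloaded[i[:project_id]] }

puts n_plus_one_queries  # 100
puts project_queries     # 1
```

In ActiveRecord terms, the fix corresponds to preloading the `project` association on the issue relation so `feature_available?` and the entity's project-ID lookup never query per issue.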
-
Committed by Mayra Cabrera
[ci skip]
-
Committed by Jasper Maes
-
Committed by Francisco Javier López
-