- 04 Aug 2016, 3 commits
-
-
Committed by Herminio Torres
We never add things `into` projects, we just add them `to` projects. So how about we rename this to `add_users_to_project`. Also rename `projects_ids` to `project_ids`, following the Rails convention.
-
Committed by Stan Hu
Previously, the `gitlab:shell:install` Rake task would never update the gitlab-shell version if the directory already existed. This could lead to incompatibility issues or random errors.
-
Committed by Douwe Maan
-
- 03 Aug 2016, 7 commits
-
-
Committed by Yorick Peterse
By using `Rouge::Lexer.find` instead of `find_fancy()` and memoizing the HTML formatter we can speed up the highlighting process by between 1.7 and 1.8 times (at least when measured using synthetic benchmarks). To measure this I used the following benchmark:

```ruby
require 'benchmark/ips'

input = ''

Dir['./app/controllers/**/*.rb'].each do |controller|
  input << <<-EOF
  <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
  EOF
end

document = Nokogiri::HTML.fragment(input)
filter = Banzai::Filter::SyntaxHighlightFilter.new(document)

puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"

Benchmark.ips do |bench|
  bench.report 'call' do
    filter.call
  end
end
```

This benchmark produces 250 KB of input. Before these changes the timing output would be as follows:

```
Calculating -------------------------------------
                call     1.000  i/100ms
-------------------------------------------------
                call     22.439  (±35.7%) i/s -     93.000
```

After these changes the output instead is as follows:

```
Calculating -------------------------------------
                call     1.000  i/100ms
-------------------------------------------------
                call     41.283  (±38.8%) i/s -    148.000
```

Note that due to the fairly high standard deviation and this being a synthetic benchmark it's entirely possible the real-world improvements are smaller.
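The memoization half of this change can be sketched in plain Ruby. This is an illustrative pattern only: `ExpensiveFormatter` and `Highlighter` below are hypothetical stand-ins, not GitLab's actual classes.

```ruby
# Illustrative sketch: building the formatter is assumed costly, so we
# build it once and reuse it across highlighting calls.
class ExpensiveFormatter
  def initialize
    @options = { css_class: 'highlight' } # imagine costly setup here
  end

  def format(tokens)
    tokens.map { |t| "<span>#{t}</span>" }.join
  end
end

module Highlighter
  # Memoized accessor: the formatter is created on first use only,
  # instead of once per highlighted code block.
  def self.formatter
    @formatter ||= ExpensiveFormatter.new
  end

  def self.highlight(tokens)
    formatter.format(tokens)
  end
end
```

Repeated calls to `Highlighter.formatter` return the same object, so the setup cost is paid once per process rather than once per document node.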
-
Committed by James Lopez
-
Committed by Yorick Peterse
By using clever XPath queries we can quite significantly improve the performance of this method. The actual improvement depends a bit on the amount of links used, but in my tests the new implementation is usually around 8 times faster than the old one. This was measured using the following benchmark:

```ruby
require 'benchmark/ips'

text = '<p>' +
  Note.select("string_agg(note, '') AS note").limit(50).take[:note] +
  '</p>'

document = Nokogiri::HTML.fragment(text)
filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)

puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"

filter.rinku_parse

Benchmark.ips(time: 15) do |bench|
  bench.report 'text_parse' do
    filter.text_parse
  end

  bench.report 'text_parse_fast' do
    filter.text_parse_fast
  end

  bench.compare!
end
```

Here the `text_parse_fast` method is the new implementation and `text_parse` the old one. The input size was around 180 MB. Running this benchmark outputs the following:

```
Input size: 181.16 MB
Calculating -------------------------------------
          text_parse     1.000  i/100ms
     text_parse_fast     9.000  i/100ms
-------------------------------------------------
          text_parse     13.021  (±15.4%) i/s -    188.000
     text_parse_fast    112.741  (± 3.5%) i/s -      1.692k

Comparison:
     text_parse_fast:      112.7 i/s
          text_parse:       13.0 i/s - 8.66x slower
```

Again the production timings may (and most likely will) vary depending on the input being processed.
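The spirit of the optimisation — cheaply ruling out text that cannot contain a link before running the expensive autolink pass, much as a targeted XPath query only visits text nodes containing `://` — can be sketched in plain Ruby. The `slow_autolink` helper and the string-based node list are hypothetical stand-ins, not the filter's real API.

```ruby
# Hypothetical stand-in for an expensive per-node autolinking pass.
def slow_autolink(text)
  text.gsub(%r{(https?://\S+)}, '<a href="\1">\1</a>')
end

# Fast path: skip nodes that cannot possibly contain a link, so the
# expensive pass only runs on genuine candidates.
def autolink_nodes(nodes)
  nodes.map { |text| text.include?('://') ? slow_autolink(text) : text }
end
```

With typical comment text, most nodes fail the cheap `include?('://')` check and never reach the expensive regex at all.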
-
Committed by Paco Guzman
So now we have `raw_diffs` too.
-
Committed by Paco Guzman
This object will manage `Gitlab::Git::Compare` instances.
-
Committed by Paco Guzman
Instead of calling `diff_collection.count`, use `diff_collection.size`, which is cached on the diff collection.
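The difference can be sketched with a toy collection (illustrative only, not the real diff collection class): `Enumerable#count` re-iterates on every call, while a memoized `size` iterates once and serves the cached value afterwards.

```ruby
# Illustrative diff collection: iterating is expensive, so #size is
# computed once and cached, while Enumerable#count re-iterates each call.
class DiffCollection
  include Enumerable

  attr_reader :iterations

  def initialize(diffs)
    @diffs = diffs
    @iterations = 0
  end

  def each
    @iterations += 1 # track how often we pay the iteration cost
    @diffs.each { |d| yield d }
  end

  def size
    @size ||= count # iterate once, then serve from the cache
  end
end
```

Calling `size` three times iterates once; calling `count` three times iterates three times.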
-
Committed by Paco Guzman
Introducing the concept of `SafeDiffs`, which relates diffs to UI highlighting.
-
- 02 Aug 2016, 4 commits
-
-
Committed by James Lopez
-
Committed by Yorick Peterse
This ensures this CI step works properly even when doing a shallow clone.
-
Committed by winniehell
-
Committed by Ahmad Sherif
Closes #20488
-
- 01 Aug 2016, 8 commits
-
-
Committed by Ahmad Sherif
Closes #20452
-
Committed by Paco Guzman
-
Committed by Stan Hu
Closes #20440
-
Committed by James Lopez
-
Committed by James Lopez
Squashed: fixed label and milestone association problems, updated specs, and refactored the reader class a bit.
-
Committed by James Lopez
-
Committed by James Lopez
Added a changelog entry, fixed specs, refactored code based on feedback, and fixed a RuboCop warning.
-
Committed by Z.J. van de Weg
-
- 30 Jul 2016, 1 commit
-
-
Committed by Z.J. van de Weg
A couple of minor edits for this branch are also included.
-
- 29 Jul 2016, 7 commits
-
-
Committed by Grzegorz Bizon
-
Committed by Z.J. van de Weg
-
Committed by Z.J. van de Weg
-
Committed by Yorick Peterse
The method `Ability.issues_readable_by_user` takes a list of issues and an optional user, and returns an Array of the issues readable by said user. This method in turn is used by `Banzai::ReferenceParser::IssueParser#nodes_visible_to_user`, so this method no longer needs to get all the available abilities just to check if a user has the "read_issue" ability. To test this I benchmarked an issue with 222 comments on my development environment. Using these changes the time spent in `nodes_visible_to_user` was reduced from around 120 ms to around 40 ms.
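The shape of the optimisation — answering "which of these issues can this user read?" in one batch pass instead of computing full ability sets per reference node — can be sketched like this. The `Issue` struct and the author-only confidentiality rule below are simplified stand-ins, not GitLab's real permission model.

```ruby
Issue = Struct.new(:id, :confidential, :author_id)

# Simplified stand-in for Ability.issues_readable_by_user: filter the
# whole list in one pass instead of computing every ability per issue.
def issues_readable_by_user(issues, user_id = nil)
  issues.select do |issue|
    !issue.confidential || issue.author_id == user_id
  end
end
```

A reference parser can then call this once for all nodes in a document, rather than once per node.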
-
Committed by Timothy Andrew
1. It makes sense to reuse these constants since we had them duplicated in the previous enum implementation. This also simplifies our `check_access` implementation, because we can use `project.team.max_member_access` directly.
2. Use `accepts_nested_attributes_for` to create push/merge access levels. This was a bit fiddly to set up, but it simplifies our code by quite a large amount. We can even get rid of `ProtectedBranches::BaseService`.
3. Move API handling back into the API (previously in `ProtectedBranches::BaseService#translate_api_params`).
4. The protected branch services now return a `ProtectedBranch` rather than `true`/`false`.
5. Run `load_protected_branches` on-demand in the `create` action, to prevent it being called unnecessarily.
6. "Masters" is pre-selected as the default option for "Allowed to Push" and "Allowed to Merge".
7. These changes were based on a review from @rymai in !5081.
-
Committed by Timothy Andrew
1. The new data model moves from `developers_can_{push,merge}` to `allowed_to_{push,merge}`.
2. The API interface has not been changed. It still accepts `developers_can_push` and `developers_can_merge` as options. These attributes are inferred from the new data model.
3. Modify the protected branch create/update services to translate from the API interface to our current data model.
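The translation layer in point 3 can be sketched as a plain function. The access-level constants and attribute names below are illustrative assumptions, not the service's exact internals.

```ruby
# Illustrative access-level values standing in for GitLab's constants.
DEVELOPER = 30
MASTER    = 40

# Map the legacy API flags onto the new access-levels data model:
# `developers_can_push: true` means developers (and up) may push,
# otherwise only masters may.
def translate_api_params(params)
  translated = params.dup

  if translated.key?(:developers_can_push)
    level = translated.delete(:developers_can_push) ? DEVELOPER : MASTER
    translated[:push_access_levels_attributes] = [{ access_level: level }]
  end

  if translated.key?(:developers_can_merge)
    level = translated.delete(:developers_can_merge) ? DEVELOPER : MASTER
    translated[:merge_access_levels_attributes] = [{ access_level: level }]
  end

  translated
end
```

Keeping this mapping at the API boundary lets the services and models speak only the new data model.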
-
Committed by Timothy Andrew
1. The crux of this change is in `UserAccess`, which looks through all the access levels, asking each if the user has access to push/merge for the current project.
2. Update the `protected_branches` factory to create access levels as necessary.
3. Fix and augment the `user_access` and `git_access` specs.
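Point 1 — asking each configured access level whether the user qualifies — can be sketched like this. The struct and the numeric access values are simplified assumptions, not the real `UserAccess` implementation.

```ruby
AccessLevel = Struct.new(:access_level)

# Simplified sketch of the UserAccess check: the user may push if any
# configured push access level is satisfied by their project access.
def can_push?(user_project_access, push_access_levels)
  push_access_levels.any? { |l| user_project_access >= l.access_level }
end
```

With `any?`, adding new kinds of access levels later only requires each level to answer the same question.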
-
- 28 Jul 2016, 3 commits
-
-
Committed by James Lopez
Fixed spec and added a changelog entry.
-
Committed by Yorick Peterse
This reduces the overhead of the method instrumentation code primarily by reducing the number of method calls. There are also some other small optimisations such as not casting timing values to Floats (there's no particular need for this), using Symbols for method call metric names, and reducing the number of Hash lookups for instrumented methods. The exact impact depends on the code being executed. For example, for a method that's only called once the difference won't be very noticeable. However, for methods that are called many times the difference can be more significant. For example, the loading time of a large commit (nrclark/dummy_project@81ebdea5df2fb42e59257cb3eaad671a5c53ca36) was reduced from around 19 seconds to around 15 seconds using these changes.
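The flavour of these optimisations — Symbol metric names, no Float casts, and a hot path with only a handful of method calls — can be sketched with a tiny timing wrapper. This is illustrative only; GitLab's instrumentation code is considerably more involved.

```ruby
# Minimal instrumentation sketch: accumulate call durations per method
# under a Symbol key, keeping the hot path as cheap as possible.
module Instrumentation
  TIMINGS = Hash.new { |h, k| h[k] = 0.0 }

  def self.instrument(name)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    # One hash lookup, no explicit Float conversion of the timing value.
    TIMINGS[name] += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    result
  end
end
```

Because the wrapper runs on every instrumented call, shaving even a few method calls off it compounds across methods that are invoked thousands of times per request.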
-
Committed by dixpac
-
- 27 Jul 2016, 7 commits
-
-
Committed by Ahmad Sherif
-
Committed by Patricio Cano
Refactor spam validation into a concern that can be easily reused, and improve legibility in `SpamCheckService`.
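Extracting shared validation into a mixin can be sketched in plain Ruby. The module name, the link-count rule, and the `Snippet` class below are illustrative assumptions; the real concern would use `ActiveSupport::Concern` and GitLab's actual spam checks.

```ruby
# Illustrative mixin: any model exposing #description can reuse the
# same spam check instead of duplicating it in each service.
module SpamCheckable
  LINK_LIMIT = 3 # hypothetical threshold for demonstration

  def spam?
    description.scan(%r{https?://}).size > LINK_LIMIT
  end
end

class Snippet
  include SpamCheckable

  attr_accessor :description

  def initialize(description)
    @description = description
  end
end
```

Any other model can opt in with a single `include`, which is what makes the concern easy to reuse.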
-
Committed by Stan Hu
NotesHelper#note_editable? and ProjectTeam#human_max_access currently take about 16% of the load time of an issue page. This MR preloads the maximum access level of users for all notes in issues and merge requests with several queries instead of one per user and caches the result in RequestStore.
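The preloading pattern — one batched lookup for all note authors, cached for the rest of the request, instead of one query per user — can be sketched like this. The `AccessPreloader` class and its hash-backed "database" are illustrative stand-ins for the real queries and RequestStore.

```ruby
# Illustrative per-request cache of maximum access levels per user.
class AccessPreloader
  attr_reader :queries

  def initialize(db)
    @db = db      # stand-in database: user_id => access level
    @queries = 0
  end

  # One batched lookup for every user id, instead of a query per user.
  def preload(user_ids)
    @cache = user_ids.to_h { |id| [id, @db[id]] }
    @queries += 1
  end

  def max_access_for(user_id)
    @cache.fetch(user_id) # served from the cache, no further queries
  end
end
```

On a page with hundreds of notes, this turns N per-author queries into a handful of batched ones.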
-
Committed by Alejandro Rodríguez
-
Committed by Patricio Cano
-
Committed by Patricio Cano
-
Committed by Patricio Cano
-