- 09 Aug, 2016 (2 commits)
  - Authored by Connor Shea
  - Authored by Z.J. van de Weg
- 08 Aug, 2016 (3 commits)
  - Authored by James Lopez
  - Authored by Grzegorz Bizon
  - Authored by Jacob Vosmaer
    This reverts commit 47b5b441. See https://gitlab.com/gitlab-org/gitlab-ce/issues/17877#note_13488047
- 06 Aug, 2016 (3 commits)
  - Authored by Gabriel Mazetto
  - Authored by Gabriel Mazetto
  - Authored by Gabriel Mazetto
- 05 Aug, 2016 (4 commits)
  - Authored by Jacob Vosmaer
    The change to base64-encoding the third argument to PostReceive in gitlab-shell made our Sidekiq ArgumentsLogger a little less useful. This change adds a log statement for the decoded data. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/20381
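The logging idea in this commit can be sketched as follows. This is a hypothetical illustration, not GitLab's actual ArgumentsLogger middleware; the method name and the argument layout are assumptions:

```ruby
require 'base64'

# Hypothetical sketch (not GitLab's real middleware): gitlab-shell passes the
# third PostReceive argument base64-encoded, so a useful log line has to
# decode it before writing. Argument layout and method name are assumptions.
def log_decoded_post_receive_args(args)
  repo_path, key_id, encoded_changes = args
  changes = Base64.decode64(encoded_changes)
  "PostReceive: repo=#{repo_path} key=#{key_id} changes=#{changes.strip}"
end
```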
  - Authored by Z.J. van de Weg
  - Authored by winniehell
  - Authored by tiagonbotelho
- 04 Aug, 2016 (7 commits)
  - Authored by Z.J. van de Weg
    Also fixes the failing test in the process.
  - Authored by Herminio Torres
    We never add things `into` projects, we just add them `to` projects. So how about we rename this to `add_users_to_project`? Also renames `projects_ids` to `project_ids`, following the Rails convention.
  - Authored by James Lopez
  - Authored by Paco Guzman
    Signed-off-by: Paco Guzman <pacoguzmanp@gmail.com>
  - Authored by Z.J. van de Weg
    Also a minor cleanup of the POST endpoint.
  - Authored by Stan Hu
    Previously the gitlab-shell version would never be updated via the `gitlab:shell:install` Rake task if the directory already existed. This could lead to incompatibility issues or random errors.
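A minimal sketch of the idea behind this fix, assuming the installed version is stored in a VERSION file inside the install directory (the file name and helper are illustrative, not the real Rake task):

```ruby
# Hedged sketch: rather than skipping installation whenever the target
# directory exists, compare the installed gitlab-shell version (read from a
# VERSION file, an assumption here) against the required one and reinstall
# on mismatch. This is not the actual gitlab:shell:install task.
def shell_up_to_date?(target_dir, required_version)
  version_file = File.join(target_dir, 'VERSION')
  File.exist?(version_file) && File.read(version_file).strip == required_version
end
```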
  - Authored by Douwe Maan
- 03 Aug, 2016 (9 commits)
  - Authored by Yorick Peterse
    By using Rouge::Lexer.find instead of find_fancy() and memoizing the HTML formatter we can speed up the highlighting process by between 1.7 and 1.8 times (at least when measured using synthetic benchmarks). To measure this I used the following benchmark:

    ```ruby
    require 'benchmark/ips'

    input = ''

    Dir['./app/controllers/**/*.rb'].each do |controller|
      input << <<-EOF
      <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      EOF
    end

    document = Nokogiri::HTML.fragment(input)
    filter = Banzai::Filter::SyntaxHighlightFilter.new(document)

    puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"

    Benchmark.ips do |bench|
      bench.report 'call' do
        filter.call
      end
    end
    ```

    This benchmark produces 250 KB of input. Before these changes the timing output would be as follows:

    ```
    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     22.439  (±35.7%) i/s -     93.000
    ```

    After these changes the output instead is as follows:

    ```
    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     41.283  (±38.8%) i/s -    148.000
    ```

    Note that due to the fairly high standard deviation and this being a synthetic benchmark it's entirely possible the real-world improvements are smaller.
  - Authored by Z.J. van de Weg
    Resolves #20123
  - Authored by Jacob Vosmaer
    Before this change we always let users push Git data over HTTP before deciding whether to accept the push. This was different from pushing over SSH, where we terminate a 'git push' early if we already know the user is not allowed to push. This change makes Git over HTTP follow the same behavior as Git over SSH. We also distinguish between HTTP 404 and 403 responses when denying Git requests, depending on whether the user is allowed to know the project exists.
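The 403-versus-404 decision described in this commit can be illustrated with a small sketch. The method and parameter names are invented for illustration; the real check lives in GitLab's Git HTTP controllers:

```ruby
# Illustrative sketch of the denial logic: respond 404 when the user is not
# allowed to know the project exists at all, and 403 when the project is
# visible but the push is forbidden. Names here are hypothetical.
def git_http_status(can_read_project:, can_push:)
  return 200 if can_push
  can_read_project ? 403 : 404
end
```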
  - Authored by James Lopez
  - Authored by Yorick Peterse
    By using clever XPath queries we can quite significantly improve the performance of this method. The actual improvement depends a bit on the amount of links used but in my tests the new implementation is usually around 8 times faster than the old one. This was measured using the following benchmark:

    ```ruby
    require 'benchmark/ips'

    text = '<p>' + Note.select("string_agg(note, '') AS note").limit(50).take[:note] + '</p>'
    document = Nokogiri::HTML.fragment(text)
    filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)

    puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"

    filter.rinku_parse

    Benchmark.ips(time: 15) do |bench|
      bench.report 'text_parse' do
        filter.text_parse
      end

      bench.report 'text_parse_fast' do
        filter.text_parse_fast
      end

      bench.compare!
    end
    ```

    Here the "text_parse_fast" method is the new implementation and "text_parse" the old one. The input size was around 180 MB. Running this benchmark outputs the following:

    ```
    Input size: 181.16 MB
    Calculating -------------------------------------
              text_parse     1.000  i/100ms
         text_parse_fast     9.000  i/100ms
    -------------------------------------------------
              text_parse     13.021  (±15.4%) i/s -    188.000
         text_parse_fast    112.741  (± 3.5%) i/s -     1.692k

    Comparison:
         text_parse_fast:    112.7 i/s
              text_parse:     13.0 i/s - 8.66x slower
    ```

    Again the production timings may (and most likely will) vary depending on the input being processed.
  - Authored by Paco Guzman
    So we have `raw_diffs` too.
  - Authored by Paco Guzman
    This object will manage Gitlab::Git::Compare instances.
  - Authored by Paco Guzman
    Instead of calling `diff_collection.count`, use `diff_collection.size`, which is cached on the diff collection.
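The caching difference can be sketched with a toy collection. The real Gitlab::Git::DiffCollection is more involved; this class is illustrative only:

```ruby
# Toy sketch: Enumerable#count walks the collection on every call, while a
# memoized #size computes once and reuses the stored value. The class name
# echoes the commit; the internals are invented for illustration.
class DiffCollectionSketch
  include Enumerable

  attr_reader :walks

  def initialize(diffs)
    @diffs = diffs
    @walks = 0 # how many times the collection has been iterated
  end

  def each(&block)
    @walks += 1
    @diffs.each(&block)
  end

  # Cached: iterates at most once, then reuses the stored value.
  def size
    @size ||= count
  end
end
```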
  - Authored by Paco Guzman
    Introduces the concept of SafeDiffs, which relates diffs to UI highlighting.
- 02 Aug, 2016 (4 commits)
  - Authored by James Lopez
  - Authored by Yorick Peterse
    This ensures this CI step works properly even when doing a shallow clone.
  - Authored by winniehell
  - Authored by Ahmad Sherif
    Closes #20488
- 01 Aug, 2016 (8 commits)
  - Authored by Ahmad Sherif
    Closes #20452
  - Authored by Paco Guzman
  - Authored by Stan Hu
    Closes #20440
  - Authored by James Lopez
  - Authored by James Lopez
    Squashed: fixed label and milestone association problems, updated specs, and refactored the reader class a bit.
  - Authored by James Lopez
  - Authored by James Lopez
    Added a changelog entry, fixed specs, refactored code based on feedback, and fixed a Rubocop warning.
  - Authored by Z.J. van de Weg