1. 09 Aug, 2016 2 commits
  2. 08 Aug, 2016 3 commits
  3. 06 Aug, 2016 3 commits
  4. 05 Aug, 2016 4 commits
  5. 04 Aug, 2016 7 commits
  6. 03 Aug, 2016 9 commits
    • Y
      Improve performance of SyntaxHighlightFilter · 038d6feb
      Authored by Yorick Peterse
      By using Rouge::Lexer.find instead of find_fancy() and memoizing the
      HTML formatter we can speed up the highlighting process by between 1.7
      and 1.8 times (at least when measured using synthetic benchmarks). To
      measure this I used the following benchmark:
      
          require 'benchmark/ips'
      
          input = ''
      
          Dir['./app/controllers/**/*.rb'].each do |controller|
            input << <<-EOF
            <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      
            EOF
          end
      
          document = Nokogiri::HTML.fragment(input)
          filter = Banzai::Filter::SyntaxHighlightFilter.new(document)
      
          puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"
      
          Benchmark.ips do |bench|
            bench.report 'call' do
              filter.call
            end
          end
      
      This benchmark produces 250 KB of input. Before these changes the timing
      output would be as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     22.439  (±35.7%) i/s -     93.000
      
      After these changes the output instead is as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     41.283  (±38.8%) i/s -    148.000
      
      Note that due to the fairly high standard deviation and this being a
      synthetic benchmark it's entirely possible the real-world improvements
      are smaller.
      038d6feb
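      The two optimizations above can be sketched with a minimal, self-contained
      example (illustrative names, not GitLab's actual filter code): the expensive
      collaborator is built once on first use and reused afterwards, instead of
      being constructed for every highlighted node.

      ```ruby
      # Minimal memoization sketch: the formatter (a stand-in for
      # Rouge::Formatters::HTML.new) is constructed exactly once.
      class HighlighterSketch
        attr_reader :formatter_builds

        def initialize
          @formatter_builds = 0
        end

        def formatter
          # Memoized: built on first call, the same instance is returned after.
          @formatter ||= build_formatter
        end

        private

        def build_formatter
          @formatter_builds += 1
          Object.new # stand-in for the real HTML formatter object
        end
      end
      ```

      Repeated calls to `formatter` return the identical object, so per-node
      construction cost disappears after the first call.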
    • Z
      Endpoints to enable and disable deploy keys · da3d3ba8
      Authored by Z.J. van de Weg
      Resolves #20123
      da3d3ba8
    • J
      Stop 'git push' over HTTP early · b8f754dd
      Authored by Jacob Vosmaer
      Before this change we always let users push Git data over HTTP before
      deciding whether to accept the push. This was different from pushing
      over SSH, where we terminate a 'git push' early if we already know the
      user is not allowed to push.
      
      This change lets Git over HTTP follow the same behavior as Git over
      SSH. We also distinguish between HTTP 404 and 403 responses when
      denying Git requests, depending on whether the user is allowed to know
      the project exists.
      b8f754dd
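      The 404-versus-403 decision described above can be sketched as a small
      hypothetical helper (not the actual GitLab code): a denied push returns
      404 when the user may not even learn that the project exists, and 403
      when they may.

      ```ruby
      # Hypothetical sketch of the response-code policy: hide the project's
      # existence with 404 when the user is not allowed to see it.
      def git_http_push_status(push_allowed:, can_see_project:)
        return 200 if push_allowed   # accept the push, proceed normally
        can_see_project ? 403 : 404  # deny; 404 conceals project existence
      end
      ```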
    • J
      Fix Import/Export error checking versions · f87eb250
      Authored by James Lopez
      f87eb250
    • Y
      Improve AutolinkFilter#text_parse performance · dd35c3dd
      Authored by Yorick Peterse
      By using clever XPath queries we can quite significantly improve the
      performance of this method. The actual improvement depends a bit on the
      number of links involved, but in my tests the new implementation is usually
      around 8 times faster than the old one. This was measured using the
      following benchmark:
      
          require 'benchmark/ips'
      
          text = '<p>' + Note.select("string_agg(note, '') AS note").limit(50).take[:note] + '</p>'
          document = Nokogiri::HTML.fragment(text)
          filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)
      
          puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"
      
          filter.rinku_parse
      
          Benchmark.ips(time: 15) do |bench|
            bench.report 'text_parse' do
              filter.text_parse
            end
      
            bench.report 'text_parse_fast' do
              filter.text_parse_fast
            end
      
            bench.compare!
          end
      
      Here the "text_parse_fast" method is the new implementation and
      "text_parse" the old one. The input size was around 180 MB. Running this
      benchmark outputs the following:
      
          Input size: 181.16 MB
          Calculating -------------------------------------
                    text_parse     1.000  i/100ms
               text_parse_fast     9.000  i/100ms
          -------------------------------------------------
                    text_parse     13.021  (±15.4%) i/s -    188.000
               text_parse_fast    112.741  (± 3.5%) i/s -      1.692k
      
          Comparison:
               text_parse_fast:      112.7 i/s
                    text_parse:       13.0 i/s - 8.66x slower
      
      Again the production timings may (and most likely will) vary depending
      on the input being processed.
      dd35c3dd
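      The XPath idea above can be illustrated with Ruby's stdlib REXML in place
      of Nokogiri (an assumption for the sake of a self-contained sketch): select
      only text nodes that have no `<a>` ancestor, so text already inside a link
      is never rescanned for URLs.

      ```ruby
      # Sketch: find autolink candidates via XPath, skipping existing links.
      require 'rexml/document'

      html = '<p>Visit http://example.com or <a href="#">this link</a></p>'
      doc  = REXML::Document.new(html)

      # Text nodes with no <a> ancestor are the only autolink candidates.
      candidates = REXML::XPath.match(doc, '//text()[not(ancestor::a)]')
                               .map(&:value)
      ```

      With this input, `candidates` contains the plain text holding the URL but
      not the anchor's text, which is exactly the set of nodes worth scanning.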
    • P
      switch from diff_file_collection to diffs · c86c1905
      Authored by Paco Guzman
      So we have raw_diffs too
      c86c1905
    • P
      Introduce Compare model in the codebase. · 1d0c7b74
      Authored by Paco Guzman
      This object will manage Gitlab::Git::Compare instances
      1d0c7b74
    • P
      Move to Gitlab::Diff::FileCollection · 8f359ea9
      Authored by Paco Guzman
      Instead of calling diff_collection.count, use diff_collection.size, which is cached on the diff_collection
      8f359ea9
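      The caching difference can be sketched with a hypothetical collection class
      (illustrative, not GitLab's Gitlab::Diff::FileCollection): `#count` walks
      the underlying diffs on every call, while `#size` memoizes the first
      result on the collection.

      ```ruby
      # Sketch: #size caches its result, #count re-enumerates each time.
      class DiffCollectionSketch
        attr_reader :walks

        def initialize(diffs)
          @diffs = diffs
          @walks = 0
        end

        def count
          @walks += 1     # every call walks the collection again
          @diffs.count
        end

        def size
          @size ||= count # cached on the collection after the first call
        end
      end
      ```

      Calling `size` repeatedly enumerates the diffs only once, which is why the
      commit prefers it over `count` in hot paths.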
    • P
      Cache highlighted diff lines for merge requests · cd7c2cb6
      Authored by Paco Guzman
      Introducing the concept of SafeDiffs, which relates diffs with UI highlighting.
      cd7c2cb6
  7. 02 Aug, 2016 4 commits
  8. 01 Aug, 2016 8 commits