1. 04 November 2015 (1 commit)
  2. 03 November 2015 (4 commits)
  3. 02 November 2015 (3 commits)
  4. 30 October 2015 (2 commits)
    • Adjusted ips/sec for find_by_any_email benchmarks · 6d3068be
      Committed by Yorick Peterse
      While these benchmarks run at roughly 1500 i/sec, setting the threshold
      to 1000 leaves some room for deviations (e.g. due to different DB
      setups).
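
      For illustration, the lowered threshold could be enforced directly with
      benchmark-ips, as in the sketch below (run via "bundle exec rails r",
      like the benchmark further down). This is only a sketch, not the
      project's actual benchmark spec.

          # Hypothetical sketch: fail when throughput drops below the threshold.
          require 'benchmark/ips'

          THRESHOLD = 1_000 # i/sec; local runs hover around 1500 i/sec

          report = Benchmark.ips do |bench|
            bench.report('find_by_any_email') do
              User.find_by_any_email('user@example.com')
            end
          end

          ips = report.entries.first.ips
          abort "Below threshold: #{ips.round} i/sec" if ips < THRESHOLD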
    • Improve performance of User.find_by_any_email · 49c081b9
      Committed by Yorick Peterse
      This query used to rely on a JOIN, effectively producing the following
      SQL:
      
          SELECT users.*
          FROM users
          LEFT OUTER JOIN emails ON emails.user_id = users.id
          WHERE (users.email = X OR emails.email = X)
          LIMIT 1;
      
      Using a JOIN means having to scan over all rows in both the "users" and
      "emails" tables, join them together, and then filter out the rows that
      don't match the criteria (though the database may already apply this
      filter while joining).
      
      In the new setup this query instead uses a sub-query, producing the
      following SQL:
      
          SELECT *
          FROM users
          WHERE id IN (select user_id FROM emails WHERE email = X)
          OR email = X
          LIMIT 1;
      
      This query has the benefit that it:
      
      1. Doesn't have to JOIN any rows
      2. Only has to operate on a relatively small set of rows from the
         "emails" table.
      
      Since most users only have a handful of emails associated with them
      (certainly not hundreds or even thousands), the set returned by the
      sub-query is small enough that it should not become problematic.
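
      A minimal sketch of what the optimized method could look like is shown
      below; the exact method body and binding style are assumptions, not
      taken verbatim from this commit.

          # Hedged sketch: produce the sub-query form shown above via a single
          # parameterised query.
          def User.find_by_any_email(email)
            sql = 'SELECT *
                   FROM users
                   WHERE id IN (SELECT user_id FROM emails WHERE email = :email)
                   OR email = :email
                   LIMIT 1'

            find_by_sql([sql, { email: email }]).first
          end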
      
      Performance of the old versus new version can be measured using the
      following benchmark:
      
          # Save this in ./bench.rb
          require 'benchmark/ips'
      
          email = 'yorick@gitlab.com'
      
          def User.find_by_any_email_old(email)
            user_table = arel_table
            email_table = Email.arel_table
      
            query = user_table.
              project(user_table[Arel.star]).
              join(email_table, Arel::Nodes::OuterJoin).
              on(user_table[:id].eq(email_table[:user_id])).
              where(user_table[:email].eq(email).or(email_table[:email].eq(email)))
      
            find_by_sql(query.to_sql).first
          end
      
          Benchmark.ips do |bench|
            bench.report 'original' do
              User.find_by_any_email_old(email)
            end
      
            bench.report 'optimized' do
              User.find_by_any_email(email)
            end
      
            bench.compare!
          end
      
      Running this locally using "bundle exec rails r bench.rb" produces the
      following output:
      
          Calculating -------------------------------------
                      original     1.000  i/100ms
                     optimized    93.000  i/100ms
          -------------------------------------------------
                      original     11.103  (± 0.0%) i/s -     56.000
                     optimized    948.713  (± 5.3%) i/s -      4.743k
      
          Comparison:
                     optimized:      948.7 i/s
                      original:       11.1 i/s - 85.45x slower
      
      In other words, the new setup is 85x faster compared to the old setup,
      at least when running this benchmark locally.
      
      For GitLab.com these improvements result in User.find_by_any_email
      taking only ~170 ms to run, instead of around 800 ms. While this is
      "only" an improvement of about 4.5x (instead of 85x), it's still
      significantly better than before.
      
      Fixes #3242
  5. 29 October 2015 (6 commits)
  6. 28 October 2015 (1 commit)
  7. 26 October 2015 (1 commit)
    • Fix CI badge · 271ad4e3
      Committed by Kamil Trzcinski
      The previous code relied on the ref being stored on the commit; however, the ref has since been moved to the build.
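
      As a rough illustration of the kind of change involved (the association
      and method names below are assumptions, not taken from the actual diff):

          # Hypothetical sketch: resolve the badge status from the latest
          # build for the ref, now that the ref no longer lives on the commit.
          def badge_status_for(project, ref)
            build = project.builds.where(ref: ref).order(id: :desc).first

            build ? build.status : 'unknown'
          end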
  8. 23 October 2015 (3 commits)
  9. 22 October 2015 (2 commits)
  10. 21 October 2015 (3 commits)
  11. 20 October 2015 (5 commits)
  12. 19 October 2015 (1 commit)
    • Improve performance of sorting milestone issues · 4ff75e31
      Committed by Yorick Peterse
      This cuts down the time it takes to sort issues of a milestone by about
      10x. In the previous setup the code would run a SQL query for every
      issue that had to be sorted. The new setup instead runs a single SQL
      query to update all the given issues at once.
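
      A minimal sketch of the single-query approach is shown below; it assumes
      positions live in a "position" column, and the method and column names
      are assumptions rather than the actual diff.

          # Hedged sketch: move every issue to its new position in one UPDATE
          # by building a CASE expression, instead of one query per issue.
          def sort_issues(milestone, ordered_ids)
            conditions = ordered_ids.each_with_index.map do |id, index|
              "WHEN #{id.to_i} THEN #{index + 1}"
            end

            Issue.where(milestone_id: milestone.id, id: ordered_ids).
              update_all("position = CASE id #{conditions.join(' ')} END")
          end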
      
      The attached benchmark used to run at around 60 iterations per second;
      with the new setup it hovers around 600 iterations per second.
      Timing-wise, a request to update a milestone with 40-something issues
      used to take about 760 ms, while the new setup takes only about 130 ms.
      
      Fixes #3066
  13. 18 October 2015 (5 commits)
  14. 16 October 2015 (3 commits)