1. 04 Jan, 2018 (1 commit)
    • Eager load event target authors whenever possible · dac51ace
      Committed by Yorick Peterse
      This ensures that the "author" association of an event's "target"
      association is eager loaded whenever the "target" association
      defines an "author" association. This in turn solves the N+1 query
      problem we first tried to solve in
      https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/15788, an
      attempt that caused problems when displaying milestones since those
      don't define an "author" association.
      
      The approach in this commit does mean that the authors are _always_
      eager loaded, since this takes place in the "belongs_to" block. This
      shouldn't pose too much of a problem, however, and as far as I can
      tell there is unfortunately no real way around it.
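      The idea can be sketched in plain Ruby; the classes, the AUTHORS
      lookup table and eager_load_authors below are illustrative stand-ins,
      not GitLab's actual code:

```ruby
# Illustrative sketch of eager loading authors for event targets.
# Author, Issue, Milestone and AUTHORS are assumed stand-ins.
Author    = Struct.new(:id, :name)
Issue     = Struct.new(:id, :author_id) # defines an author
Milestone = Struct.new(:id)             # defines no author

AUTHORS = { 1 => Author.new(1, 'alice'), 2 => Author.new(2, 'bob') }

def eager_load_authors(targets)
  # Collect author ids only from targets that define one (milestones
  # don't), then resolve them in a single batched lookup rather than
  # issuing one query per event.
  ids = targets.select { |t| t.respond_to?(:author_id) }
               .map(&:author_id).compact.uniq
  AUTHORS.slice(*ids) # stands in for one "WHERE id IN (...)" query
end
```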
      dac51ace
  2. 22 Dec, 2017 (1 commit)
  3. 06 Sep, 2017 (1 commit)
    • Finish migration to the new events setup · 235b105c
      Committed by Yorick Peterse
      This finishes the procedure for migrating events from the old format
      into the new format. Code no longer uses the old setup and the database
      tables used during the migration process are swapped, with the old table
      being dropped.
      
      While the database migration can be reversed, this will 1) take a
      lot of time as data has to be copied around, and 2) won't restore
      data in the "events.data" column, as we have no way of restoring it.
      
      Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/37241
      235b105c
  4. 23 Aug, 2017 (1 commit)
  5. 10 Aug, 2017 (1 commit)
    • Migrate events into a new format · 0395c471
      Committed by Yorick Peterse
      This commit migrates events data in such a way that push events are
      stored much more efficiently. This is done by creating a shadow table
      called "events_for_migration", and a table called "push_event_payloads"
      which is used for storing the push data of push events. The
      background migration in this commit will copy events from the
      "events" table into the "events_for_migration" table; push events
      will also have a row created in "push_event_payloads".
      
      This approach allows us to reclaim space in the next release by simply
      swapping the "events" and "events_for_migration" tables, then dropping
      the old events (now "events_for_migration") table.
      
      The new table structure is also optimised for storage space, and does
      not include the unused "title" column nor the "data" column (since this
      data is moved to "push_event_payloads").
      
      == Newly Created Events
      
      Newly created events are inserted into both "events" and
      "events_for_migration", both using the exact same primary key value. The
      table "push_event_payloads" in turn has a foreign key to the _shadow_
      table. This removes the need for recreating and validating the foreign
      key after swapping the tables. Since the shadow table also has a foreign
      key to "projects.id" we also don't have to worry about orphaned rows.
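      The dual-write with a shared primary key can be sketched as follows;
      FakeConn is a made-up in-memory stand-in for a database connection,
      and only the table names come from this commit message:

```ruby
# Minimal in-memory sketch of the dual-write described above. FakeConn
# is a hypothetical stand-in, not GitLab's actual database layer.
class FakeConn
  def initialize
    @tables  = Hash.new { |h, k| h[k] = {} }
    @next_id = 0
  end

  def insert(table, row)
    id = row[:id] || (@next_id += 1) # generate an id unless one is given
    @tables[table][id] = row.merge(id: id)
    id
  end

  def [](table)
    @tables[table]
  end
end

# Insert into "events" first, then reuse the generated id for the shadow
# table so the two tables can later be swapped without remapping keys.
def create_event(conn, attrs)
  id = conn.insert('events', attrs)
  conn.insert('events_for_migration', attrs.merge(id: id))
  id
end
```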
      
      This approach however does require some additional storage as we're
      duplicating a portion of the events data for at least 1 release. The
      exact amount is hard to estimate, but for GitLab.com this is expected to
      be between 10 and 20 GB at most. The background migration in this commit
      deliberately does _not_ update the "events" table as doing so would put
      a lot of pressure on PostgreSQL's auto vacuuming system.
      
      == Supporting Both Old And New Events
      
      Application code has also been adjusted to support push events using
      both the old and new data formats. This is done by creating a PushEvent
      class which extends the regular Event class. Using Rails' Single Table
      Inheritance system we can ensure the right class is used for the right
      data, which in this case is based on the value of `events.action`. To
      support displaying old and new data at the same time the PushEvent class
      re-defines a few methods of the Event class, falling back to their
      original implementations for push events in the old format.
      
      Once all existing events have been migrated the various push event
      related methods can be removed from the Event model, and the calls to
      `super` can be removed from the methods in the PushEvent model.
      
      The UI and event Atom feed have also been slightly changed to better
      handle this new setup; fortunately, only a few changes were
      necessary to make this work.
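      The dispatch described above can be sketched in plain Ruby; the
      PUSHED constant, the attribute layout and commit_count are assumed
      for illustration, and the real code relies on Rails' STI machinery
      rather than a hand-rolled factory:

```ruby
# Pure-Ruby sketch of class selection based on events.action, with
# PushEvent falling back to the old-format behaviour via super.
PUSHED = 5 # assumed numeric value of the push action

class Event
  attr_reader :action, :data, :payload

  def initialize(action:, data: nil, payload: nil)
    @action, @data, @payload = action, data, payload
  end

  # Stands in for Rails' STI: pick the class from the action value.
  def self.build(attrs)
    (attrs[:action] == PUSHED ? PushEvent : Event).new(**attrs)
  end

  def commit_count
    data && data[:total_commits_count] # old format: kept in events.data
  end
end

class PushEvent < Event
  def commit_count
    # Prefer the new format; fall back to the old-format implementation.
    payload ? payload[:commit_count] : super
  end
end
```

      Once all rows are migrated, the fallback (and the methods in Event
      it calls into) can be deleted, as the commit message notes.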
      
      == API Changes
      
      The API only displays push data of events in the new format. Supporting
      both formats in the API is a bit more difficult compared to the UI.
      Since the old push data was not really well documented (apart from one
      example that used an incorrect "action" name) I decided that supporting
      both was not worth the effort, especially since events will be migrated
      in a few days _and_ new events are created in the correct format.
      0395c471
  6. 03 Aug, 2017 (1 commit)
  7. 27 Jul, 2017 (1 commit)
  8. 21 Jun, 2017 (1 commit)
  9. 05 May, 2017 (1 commit)
  10. 04 May, 2017 (1 commit)
  11. 23 Feb, 2017 (2 commits)
  12. 02 Feb, 2017 (1 commit)
  13. 27 Jan, 2017 (1 commit)
  14. 25 Nov, 2016 (1 commit)
  15. 16 Nov, 2016 (1 commit)
  16. 09 Nov, 2016 (1 commit)
  17. 21 Oct, 2016 (1 commit)
    • Differentiate the expire from leave event · f488b9f7
      Committed by Callum Dryden
      At the moment we cannot see whether a user left a project because
      their membership expired or because they opted to leave the project
      themselves. This adds a new event type that allows us to make this
      differentiation. Note that it is not really feasible to go back and
      reliably fix up the previous events. As a result the events for
      previous expiry removals will remain the same; events of this nature
      going forward, however, will be correctly represented.
      f488b9f7
  18. 20 Oct, 2016 (1 commit)
    • Differentiate the expire from leave event · 9124310f
      Committed by Callum Dryden
      At the moment we cannot see whether a user left a project because
      their membership expired or because they opted to leave the project
      themselves. This adds a new event type that allows us to make this
      differentiation. Note that it is not really feasible to go back and
      reliably fix up the previous events. As a result the events for
      previous expiry removals will remain the same; events of this nature
      going forward, however, will be correctly represented.
      9124310f
  19. 13 Oct, 2016 (1 commit)
  20. 11 Oct, 2016 (1 commit)
  21. 05 Oct, 2016 (1 commit)
    • Remove lease from Event#reset_project_activity · c9bcfc63
      Committed by Yorick Peterse
      Per GitLab.com's performance metrics this method could take up to 5
      seconds of wall time to complete, while only taking 1-2 milliseconds of
      CPU time. Removing the Redis lease in favour of conditional updates
      allows us to work around this.
      
      A slight drawback is that this allows for multiple threads/processes to
      try and update the same row. However, only a single thread/process will
      ever win since the UPDATE query uses a WHERE condition to only update
      rows that were not updated in the last hour.
      
      Fixes gitlab-org/gitlab-ce#22473
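      The conditional update can be sketched in plain Ruby; the in-memory
      hash stands in for a projects row, and the guard mirrors an
      "UPDATE ... WHERE last_activity_at < now - 1 hour" query:

```ruby
# Sketch of the conditional update described above; only the thread or
# process whose WHERE-style condition holds actually writes the row.
ONE_HOUR = 60 * 60

def reset_project_activity(project, now = Time.now)
  last = project[:last_activity_at]
  # Mirrors the WHERE clause: skip rows updated within the last hour.
  return false if last && last >= now - ONE_HOUR

  project[:last_activity_at] = now # the "winning" UPDATE
  true
end
```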
      c9bcfc63
  22. 19 Sep, 2016 (1 commit)
  23. 08 Jul, 2016 (1 commit)
  24. 06 Jul, 2016 (1 commit)
  25. 04 Jul, 2016 (1 commit)
  26. 16 Jun, 2016 (2 commits)
  27. 14 Jun, 2016 (1 commit)
  28. 03 Jun, 2016 (2 commits)
  29. 10 May, 2016 (1 commit)
    • Remove the annotate gem and delete old annotations · f1479b56
      Committed by Jeroen van Baarsen
      In 8278b763 the default behaviour of annotate changed, which was
      causing a lot of noise in diffs. We decided in #17382 that it is
      better to get rid of the annotate gem entirely, and instead let
      people look at schema.rb for the columns in a table.
      
      Fixes: #17382
      f1479b56
  30. 25 Apr, 2016 (1 commit)
  31. 25 Mar, 2016 (2 commits)
  32. 18 Mar, 2016 (1 commit)
  33. 27 Jan, 2016 (1 commit)
    • Use Atom update times of the first event · de7c9c7a
      Committed by Yorick Peterse
      By simply loading the first event from the already sorted set we save
      ourselves extra (slow) queries just to get the latest update timestamp.
      This removes the need for Event.latest_update_time and significantly
      reduces the time needed to build an Atom feed.
      
      Fixes gitlab-org/gitlab-ce#12415
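      The idea reduces to taking the head of an already-sorted collection
      (illustrative data below; in the real code the sorted set is an
      ActiveRecord relation ordered newest-first):

```ruby
# With events already sorted newest-first, the feed's update time is
# just the first event's timestamp; no extra MAX() query is needed.
events = [
  { id: 3, updated_at: Time.utc(2016, 1, 27) }, # newest first
  { id: 2, updated_at: Time.utc(2016, 1, 20) },
]

feed_updated_at = events.first && events.first[:updated_at]
```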
      de7c9c7a
  34. 09 Dec, 2015 (1 commit)
  35. 18 Nov, 2015 (2 commits)
    • Added Event.limit_recent · 01620dd7
      Committed by Yorick Peterse
      This will be used to move some querying logic from the users controller
      to the Event model (where it belongs).
      01620dd7
    • Faster way of obtaining latest event update time · 054f2f98
      Committed by Yorick Peterse
      Instead of using MAX(events.updated_at) we can simply sort the events in
      descending order by the "id" column and grab the first row. In other
      words, instead of this:
      
          SELECT max(events.updated_at) AS max_id
          FROM events
          LEFT OUTER JOIN projects   ON projects.id   = events.project_id
          LEFT OUTER JOIN namespaces ON namespaces.id = projects.namespace_id
          WHERE events.author_id IS NOT NULL
          AND events.project_id IN (13083);
      
      we can use this:
      
          SELECT events.updated_at AS max_id
          FROM events
          LEFT OUTER JOIN projects   ON projects.id   = events.project_id
          LEFT OUTER JOIN namespaces ON namespaces.id = projects.namespace_id
          WHERE events.author_id IS NOT NULL
          AND events.project_id IN (13083)
          ORDER BY events.id DESC
          LIMIT 1;
      
      This has the benefit that on PostgreSQL a backwards index scan can be
      used, which due to the "LIMIT 1" will at most process only a single row.
      This in turn greatly speeds up the process of grabbing the latest update
      time. This can be confirmed by looking at the query plans. The first
      query produces the following plan:
      
          Aggregate  (cost=43779.84..43779.85 rows=1 width=12) (actual time=2142.462..2142.462 rows=1 loops=1)
            ->  Index Scan using index_events_on_project_id on events  (cost=0.43..43704.69 rows=30060 width=12) (actual time=0.033..2138.086 rows=32769 loops=1)
                  Index Cond: (project_id = 13083)
                  Filter: (author_id IS NOT NULL)
          Planning time: 1.248 ms
          Execution time: 2142.548 ms
      
      The second query in turn produces the following plan:
      
          Limit  (cost=0.43..41.65 rows=1 width=16) (actual time=1.394..1.394 rows=1 loops=1)
            ->  Index Scan Backward using events_pkey on events  (cost=0.43..1238907.96 rows=30060 width=16) (actual time=1.394..1.394 rows=1 loops=1)
                  Filter: ((author_id IS NOT NULL) AND (project_id = 13083))
                  Rows Removed by Filter: 2104
          Planning time: 0.166 ms
          Execution time: 1.408 ms
      
      According to the above plans the 2nd query is around 1500 times faster.
      However, re-running the first query produces timings of around 80 ms,
      making the 2nd query "only" around 55 times faster.
      054f2f98