  1. 10 September 2010 (14 commits)
    • perf: Per-pmu-per-cpu contexts · 108b02cf
      Committed by Peter Zijlstra
      Allocate per-cpu contexts per pmu.
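
      A minimal sketch of the resulting shape (the helper name below is
      illustrative, not necessarily the exact function in the patch): struct
      pmu carries its own per-cpu set of contexts, allocated when the pmu is
      set up.

        struct pmu {
                /* ... */
                struct perf_cpu_context __percpu *pmu_cpu_context;
        };

        static int perf_pmu_alloc_cpu_contexts(struct pmu *pmu)
        {
                int cpu;

                pmu->pmu_cpu_context = alloc_percpu(struct perf_cpu_context);
                if (!pmu->pmu_cpu_context)
                        return -ENOMEM;

                for_each_possible_cpu(cpu) {
                        struct perf_cpu_context *cpuctx;

                        cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
                        /* initialise cpuctx->ctx (lists, lock) for this pmu */
                }
                return 0;
        }
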
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      108b02cf
    • perf: Per cpu-context rotation timer · b5ab4cd5
      Committed by Peter Zijlstra
      Give each cpu-context its own timer so that it is a self-contained
      entity; this eases the way for per-pmu-per-cpu contexts and provides
      the basic infrastructure to allow different rotation times per pmu
      (see the sketch after the notes below).
      
      Things to look at:
       - folding the tick and these TICK_NSEC timers
       - separate task context rotation
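
      One possible shape for such a timer, sketched with an hrtimer firing
      every TICK_NSEC (the field and helper names here are illustrative
      assumptions, not necessarily what the patch uses):

        static enum hrtimer_restart perf_cpu_context_rotate(struct hrtimer *timer)
        {
                struct perf_cpu_context *cpuctx =
                        container_of(timer, struct perf_cpu_context, rotation_timer);

                perf_rotate_context(cpuctx);    /* assumed rotation helper */
                hrtimer_forward_now(timer, ns_to_ktime(TICK_NSEC));
                return HRTIMER_RESTART;
        }

        static void perf_cpu_context_init_timer(struct perf_cpu_context *cpuctx)
        {
                hrtimer_init(&cpuctx->rotation_timer, CLOCK_MONOTONIC,
                             HRTIMER_MODE_REL);
                cpuctx->rotation_timer.function = perf_cpu_context_rotate;
        }
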
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b5ab4cd5
    • perf: Remove the swevent hash-table from the cpu context · b28ab83c
      Committed by Peter Zijlstra
      Separate the swevent hash-table from the cpu_context bits in
      preparation for per pmu cpu contexts.
      
      This keeps the swevent hash a global entity.
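
      Roughly, the swevent hash state moves into its own per-cpu container,
      independent of any (soon to be per-pmu) perf_cpu_context. A sketch,
      with names patterned on the perf code of this period:

        struct swevent_htable {
                struct swevent_hlist    *swevent_hlist;   /* RCU-managed hash */
                struct mutex            hlist_mutex;
                int                     hlist_refcount;
        };

        static DEFINE_PER_CPU(struct swevent_htable, swevent_htable);
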
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b28ab83c
    • perf: Separate find_get_context() from event initialization · c3f00c70
      Committed by Peter Zijlstra
      Separate find_get_context() from the event allocation and
      initialization so that we may make find_get_context() depend
      on the event pmu in a later patch.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c3f00c70
    • perf: Remove the sysfs bits · 15ac9a39
      Committed by Peter Zijlstra
      Neither the overcommit nor the reservation sysfs parameter was
      actually working; remove them, as they'll only get in the way.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15ac9a39
    • perf: Rework the PMU methods · a4eaf7f1
      Committed by Peter Zijlstra
      Replace pmu::{enable,disable,start,stop,unthrottle} with
      pmu::{add,del,start,stop}, all of which take a flags argument.
      
      The new interface extends the capability to stop a counter while
      keeping it scheduled on the PMU. We replace the throttled state with
      the generic stopped state.
      
      This also allows us to efficiently stop/start counters over certain
      code paths (like IRQ handlers).
      
      It also allows scheduling a counter without it starting, allowing for
      a generic frozen state (useful for rotating stopped counters).
      
      The stopped state is implemented in two different ways, depending on
      how the architecture implemented the throttled state:
      
       1) We disable the counter:
          a) the pmu has per-counter enable bits, we flip that
          b) we program a NOP event, preserving the counter state
      
       2) We store the counter state and ignore all read/overflow events
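
      The resulting interface looks roughly like this (a sketch of the
      relevant struct pmu members and flags, not the complete structure):

        #define PERF_EF_START   0x01    /* start the counter when adding    */
        #define PERF_EF_RELOAD  0x02    /* reload the counter when starting */
        #define PERF_EF_UPDATE  0x04    /* update the counter when stopping */

        struct pmu {
                /* ... */
                int  (*add)   (struct perf_event *event, int flags); /* was enable  */
                void (*del)   (struct perf_event *event, int flags); /* was disable */
                void (*start) (struct perf_event *event, int flags); /* PERF_EF_RELOAD */
                void (*stop)  (struct perf_event *event, int flags); /* PERF_EF_UPDATE */
                void (*read)  (struct perf_event *event);
        };
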
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a4eaf7f1
    • perf: Shrink hw_perf_event · fa407f35
      Committed by Peter Zijlstra
      Use hw_perf_event::period_left instead of hw_perf_event::remaining
      and win back 8 bytes.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa407f35
    • perf: Default PMU ops · ad5133b7
      Committed by Peter Zijlstra
      Provide default implementations for the pmu txn methods, this
      allows us to remove some conditional code.
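
      The idea, roughly: if a pmu registers without transaction methods,
      install defaults at registration time so the generic code can call
      them unconditionally. A sketch, built on the perf_pmu_disable() /
      perf_pmu_enable() helpers introduced elsewhere in this series:

        static void perf_pmu_start_txn(struct pmu *pmu)
        {
                perf_pmu_disable(pmu);
        }

        static int perf_pmu_commit_txn(struct pmu *pmu)
        {
                perf_pmu_enable(pmu);
                return 0;
        }

        static void perf_pmu_cancel_txn(struct pmu *pmu)
        {
                perf_pmu_enable(pmu);
        }

        /* in perf_pmu_register(), roughly: */
        if (!pmu->start_txn) {
                pmu->start_txn  = perf_pmu_start_txn;
                pmu->commit_txn = perf_pmu_commit_txn;
                pmu->cancel_txn = perf_pmu_cancel_txn;
        }
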
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad5133b7
    • perf: Per PMU disable · 33696fc0
      Committed by Peter Zijlstra
      Changes perf_disable() into perf_pmu_disable().
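
      A sketch of the per-pmu variant: a per-cpu nesting count per pmu, with
      the hardware only touched on the outermost disable/enable (names
      follow the perf code of this era):

        void perf_pmu_disable(struct pmu *pmu)
        {
                int *count = this_cpu_ptr(pmu->pmu_disable_count);

                if (!(*count)++)
                        pmu->pmu_disable(pmu);  /* first disable on this cpu */
        }

        void perf_pmu_enable(struct pmu *pmu)
        {
                int *count = this_cpu_ptr(pmu->pmu_disable_count);

                if (!--(*count))
                        pmu->pmu_enable(pmu);   /* last enable on this cpu */
        }
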
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      33696fc0
    • perf: Reduce perf_disable() usage · 24cd7f54
      Committed by Peter Zijlstra
      Since the current perf_disable() usage is only an optimization,
      remove it for now. This eases the removal of the __weak
      hw_perf_enable() interface.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      24cd7f54
    • perf: Unindent labels · 9ed6060d
      Committed by Peter Zijlstra
      Fixup random annoying style bits.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ed6060d
    • perf: Register PMU implementations · b0a873eb
      Committed by Peter Zijlstra
      Simple registration interface for struct pmu, this provides the
      infrastructure for removing all the weak functions.
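
      The shape of the registration interface, roughly (a sketch; the list
      and locking details are simplified): pmus sit on a global list, and
      event creation walks that list asking each pmu whether it accepts the
      event.

        static LIST_HEAD(pmus);
        static DEFINE_MUTEX(pmus_lock);

        int perf_pmu_register(struct pmu *pmu)
        {
                mutex_lock(&pmus_lock);
                /* allocate per-pmu state (e.g. per-cpu contexts) here */
                list_add_rcu(&pmu->entry, &pmus);
                mutex_unlock(&pmus_lock);
                return 0;
        }

        void perf_pmu_unregister(struct pmu *pmu)
        {
                mutex_lock(&pmus_lock);
                list_del_rcu(&pmu->entry);
                mutex_unlock(&pmus_lock);
                /* wait for readers, then free per-pmu state */
        }

        /* event creation, under the appropriate read-side lock: */
        static struct pmu *perf_init_event(struct perf_event *event)
        {
                struct pmu *pmu;

                list_for_each_entry_rcu(pmu, &pmus, entry) {
                        if (!pmu->event_init(event))
                                return pmu;     /* this pmu claimed the event */
                }
                return ERR_PTR(-ENOENT);
        }
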
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0a873eb
    • perf: Deconstify struct pmu · 51b0fe39
      Committed by Peter Zijlstra
      sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51b0fe39
    • perf: Fix CPU hotplug · 5e11637e
      Committed by Peter Zijlstra
      Since we have UP_PREPARE, we should also have UP_CANCELED.
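
      In notifier terms: the UP_CANCELED case must undo what UP_PREPARE did,
      just as DOWN_PREPARE does. A sketch, using the hotplug notifier style
      of that era:

        static int __cpuinit
        perf_cpu_notify(struct notifier_block *self, unsigned long action,
                        void *hcpu)
        {
                unsigned int cpu = (long)hcpu;

                switch (action & ~CPU_TASKS_FROZEN) {
                case CPU_UP_PREPARE:
                        perf_event_init_cpu(cpu);
                        break;

                case CPU_UP_CANCELED:   /* bringup failed: undo UP_PREPARE */
                case CPU_DOWN_PREPARE:
                        perf_event_exit_cpu(cpu);
                        break;

                default:
                        break;
                }

                return NOTIFY_OK;
        }
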
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5e11637e
  2. 30 August 2010 (1 commit)
    • perf_events: Fix time tracking for events with pid != -1 and cpu != -1 · fa66f07a
      Committed by Stephane Eranian
      Per-thread events with a cpu filter, i.e., cpu != -1, were not
      reporting correct timings when the thread never ran on the
      monitored cpu. The time enabled was reported as a negative
      value.
      
      This patch fixes the problem by updating tstamp_stopped,
      tstamp_running in event_sched_out() for events with filters and
      which are marked as INACTIVE.
      
      The function group_sched_out() is modified to systematically
      call into event_sched_out() to avoid duplicating the timing
      adjustment code.
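
      The core of the fix, sketched (field and helper names as in the perf
      code of this period; treat it as an illustration of the
      event_sched_out() hunk rather than the literal diff):

        static void event_sched_out(struct perf_event *event,
                                    struct perf_cpu_context *cpuctx,
                                    struct perf_event_context *ctx)
        {
                /*
                 * An INACTIVE event that merely failed its cpu filter must
                 * still have its timestamps advanced, otherwise time_enabled
                 * ends up negative.
                 */
                if (event->state == PERF_EVENT_STATE_INACTIVE &&
                    !event_filter_match(event)) {
                        u64 delta = ctx->time - event->tstamp_stopped;

                        event->tstamp_running += delta;
                        event->tstamp_stopped  = ctx->time;
                }

                if (event->state != PERF_EVENT_STATE_ACTIVE)
                        return;

                /* ... usual deactivation path ... */
        }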
      
      With the patch, I now get:
      
      $ task_cpu -i -e unhalted_core_cycles,unhalted_core_cycles
      noploop 2 noploop for 2 seconds
      CPU0 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      CPU0 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      
      CPU1 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      CPU1 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      
      CPU2 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      CPU2 0		   unhalted_core_cycles (ena=1,991,136,594, run=0)
      
      CPU3 4,747,990,931 unhalted_core_cycles (ena=1,991,136,594, run=1,991,136,594)
      CPU3 4,747,990,931 unhalted_core_cycles (ena=1,991,136,594, run=1,991,136,594)
      Signed-off-by: Stephane Eranian <eranian@gmail.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus@samba.org
      Cc: davem@davemloft.net
      Cc: fweisbec@gmail.com
      Cc: perfmon2-devel@lists.sf.net
      Cc: eranian@google.com
      LKML-Reference: <4c76802d.aae9d80a.115d.70fe@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa66f07a
  3. 19 August 2010 (4 commits)
    • perf: Humanize the number of contexts · 7ae07ea3
      Committed by Frederic Weisbecker
      Instead of hardcoding the number of contexts for the recursion
      barriers, define a cpp constant to make the code more
      self-explanatory.
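
      In code terms, roughly (a sketch): the contexts are task, softirq,
      hardirq and NMI, so the recursion counters become an array dimensioned
      by a single constant.

        #define PERF_NR_CONTEXTS        4       /* task, softirq, hardirq, nmi */

        /* e.g. the per-cpu recursion counters become: */
        int     recursion[PERF_NR_CONTEXTS];
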
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      7ae07ea3
    • perf: Fix race in callchains · 927c7a9e
      Committed by Frederic Weisbecker
      Now that software events no longer run with interrupts disabled in
      the event path, callchains can nest in any context. So separating
      NMI and other contexts into two buffers has become racy.
      
      Fix this by providing one buffer per nesting level (see the sketch
      after the version notes below). Given the size of the callchain
      entries (2040 bytes * 4), we now need to allocate them dynamically.
      
      v2: Fixed put_callchain_entry call after recursion.
          Fix the type of the recursion, it must be an array.
      
      v3: Use a manual per cpu allocation (temporary solution until NMIs
          can safely access vmalloc'ed memory).
          Do a better separation between callchain reference tracking and
          allocation. Make the "put" path lockless for non-release cases.
      
      v4: Protect the callchain buffers with rcu.
      
      v5: Do the cpu buffers allocations node affine.
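
      A sketch of the resulting data structure (names illustrative, shaped
      after the description above): one callchain entry per nesting level
      per cpu, the whole set RCU-freed.

        struct callchain_cpus_entries {
                struct rcu_head                 rcu_head;
                struct perf_callchain_entry    *cpu_entries[0]; /* one slab per cpu */
        };

        /*
         * Each cpu slab holds one entry per nesting level, so a callchain
         * taken from an NMI cannot scribble over one being built in task
         * context, e.g.:
         *
         *   size = 4 * sizeof(struct perf_callchain_entry);
         *   entries->cpu_entries[cpu] = kmalloc_node(size, GFP_KERNEL,
         *                                            cpu_to_node(cpu));
         */
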
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Borislav Petkov <bp@amd64.org>
      927c7a9e
    • perf: Factorize callchain context handling · f72c1a93
      Committed by Frederic Weisbecker
      Store the kernel and user contexts from the generic layer instead
      of in each arch; this consolidates some repetitive code.
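
      Concretely, the generic layer now stores the context markers itself
      before invoking the arch walkers, so every arch can drop its own
      PERF_CONTEXT_KERNEL / PERF_CONTEXT_USER handling. A sketch (the store
      helper is patterned on this series):

        static inline void
        perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
        {
                if (entry->nr < PERF_MAX_STACK_DEPTH)
                        entry->ip[entry->nr++] = ip;
        }

        /* generic layer, before calling into the arch: */
        perf_callchain_store(entry, PERF_CONTEXT_KERNEL);
        perf_callchain_kernel(entry, regs);
        /* ... and likewise PERF_CONTEXT_USER before perf_callchain_user() */
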
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
      f72c1a93
    • perf: Generalize some arch callchain code · 56962b44
      Committed by Frederic Weisbecker
      - Most archs use one callchain buffer per cpu, except x86, which needs
        to deal with NMIs. Provide a default perf_callchain_buffer()
        implementation that x86 overrides.
      
      - Centralize all the kernel/user regs handling and invoke the new arch
        handlers from there: perf_callchain_user() / perf_callchain_kernel().
        That avoids all the user_mode(), current->mm checks and so on in each
        arch (see the sketch below).
      
      - Invert some parameters in the perf_callchain_*() helpers: entry to the
        left, regs to the right, following the traditional (dst, src) order.
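
      A sketch of the resulting generic entry point (simplified; the
      per-nesting-level buffer handling is elided):

        void perf_callchain(struct perf_callchain_entry *entry,
                            struct pt_regs *regs)
        {
                entry->nr = 0;

                if (!user_mode(regs)) {
                        perf_callchain_kernel(entry, regs);     /* arch hook */
                        if (current->mm)
                                regs = task_pt_regs(current);
                        else
                                regs = NULL;
                }

                if (regs)
                        perf_callchain_user(entry, regs);       /* arch hook */
        }
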
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
      56962b44
  4. 09 June 2010 (12 commits)
  5. 03 June 2010 (1 commit)
    • perf: Fix crash in swevents · c6df8d5a
      Committed by Peter Zijlstra
      Frederic reported that because swevents handling doesn't disable IRQs
      anymore, we can get a recursion of perf_adjust_period(), once from
      overflow handling and once from the tick.
      
      If both call ->disable, we get a double hlist_del_rcu() and trigger
      a LIST_POISON2 dereference.
      
      Since we don't actually need to stop/start a swevent to re-program
      the hardware (lack of hardware to program), simply nop out these
      callbacks for the swevent pmu.
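
      Schematically (against the pre-rework struct pmu of this period, which
      still had enable/disable/start/stop/unthrottle methods; the nop helper
      names are illustrative):

        /* Software events have no hardware to (re)program, so these
         * callbacks can safely do nothing, which breaks the
         * perf_adjust_period() recursion described above. */
        static void perf_swevent_nop_void(struct perf_event *event)
        {
        }

        static int perf_swevent_nop_int(struct perf_event *event)
        {
                return 0;
        }

        /* in the swevent pmu's method table, roughly: */
        .start          = perf_swevent_nop_int,
        .stop           = perf_swevent_nop_void,
        .unthrottle     = perf_swevent_nop_void,
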
      Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1275557609.27810.35218.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c6df8d5a
  6. 31 May 2010 (4 commits)
    • perf_events: Fix unincremented buffer base on partial copy · 74048f89
      Committed by Frederic Weisbecker
      If a sample size crosses to the next page boundary, the copy
      will be made in more than one step. However we forget to advance
      the source offset for the next copy, leading to unexpected double
      copies that completely mess up the traces.
      
      This fixes various kinds of bad traces that have irrelevant
      data inside, as an example:
      
      	geany-4979  [001]  5758.077775: sched_switch: prev_comm=! prev_pid=121
      		prev_prio=0 prev_state=S|D|Z|X|x ==> next_comm= next_pid=7497072
      		next_prio=0
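
      The essence of the fix, sketched (simplified from the
      perf_output_copy() loop; the page-switching bookkeeping is elided):
      each partial memcpy must advance the source pointer as well.

        do {
                unsigned long size = min_t(unsigned long, handle->size, len);

                memcpy(handle->addr, buf, size);

                buf  += size;           /* the missing source advance */
                len  -= size;
                handle->addr += size;
                handle->size -= size;

                if (!handle->size) {
                        /* advance handle->addr/size to the next page */
                }
        } while (len);
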
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1274988898-5639-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      74048f89
    • perf_events: Fix event scheduling issues introduced by transactional API · 90151c35
      Committed by Stephane Eranian
      The transactional API patch between the generic and model-specific
      code introduced several important bugs with event scheduling, at
      least on X86. If you had pinned events, e.g., watchdog, and were
      over-committing the PMU, you would get bogus counts. The bug was
      showing up on Intel CPUs because events would move around more
      often than on AMD. But the problem also existed on AMD, though it
      was harder to expose.
      
      The issues were:
      
       - group_sched_in() was missing a cancel_txn() in the error path
      
       - cpuc->n_added was not properly maintained, leading to missing
         actions in hw_perf_enable(), i.e., n_running being 0. You cannot
         update n_added until you know the transaction has succeeded. In
         case of failed transaction n_added was not adjusted back.
      
       - in case of failed transactions, event_sched_out() was called
         and eventually invoked x86_disable_event() to touch the HW reg.
         But with transactions, on X86, event_sched_in() does not touch
         HW registers, it simply collects events into a list. Thus, you
         could end up calling x86_disable_event() on a counter which
         did not correspond to the current event when idx != -1.
      
      The patch modifies the generic and X86 code to avoid all those problems.
      
      First, we keep track of the number of events added last. In case the
      transaction fails, we subtract them from n_added. This approach is
      necessary (as opposed to delaying updates to n_added) because not all
      event updates use the transaction API, e.g., single events.
      
      Second, we encapsulate the event_sched_in() and event_sched_out() in
      group_sched_in() inside the transaction. That makes the operations
      symmetrical and you can also detect that you are inside a transaction
      and skip the HW reg access by checking cpuc->group_flag.
      
      With this patch, you can now overcommit the PMU even with pinned
      system-wide events present and still get valid counts.
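
      A sketch of the resulting group_sched_in() shape (simplified, argument
      lists trimmed): the whole group is scheduled inside one transaction
      and every error path cancels it.

        static int group_sched_in(struct perf_event *group_event,
                                  struct perf_cpu_context *cpuctx,
                                  struct perf_event_context *ctx)
        {
                struct perf_event *event, *partial_group = NULL;
                struct pmu *pmu = group_event->pmu;

                pmu->start_txn(pmu);

                if (event_sched_in(group_event, cpuctx, ctx)) {
                        pmu->cancel_txn(pmu);
                        return -EAGAIN;
                }

                list_for_each_entry(event, &group_event->sibling_list, group_entry) {
                        if (event_sched_in(event, cpuctx, ctx)) {
                                partial_group = event;
                                goto group_error;
                        }
                }

                if (!pmu->commit_txn(pmu))
                        return 0;

        group_error:
                /* unwind the partially scheduled group, then cancel */
                list_for_each_entry(event, &group_event->sibling_list, group_entry) {
                        if (event == partial_group)
                                break;
                        event_sched_out(event, cpuctx, ctx);
                }
                event_sched_out(group_event, cpuctx, ctx);

                pmu->cancel_txn(pmu);
                return -EAGAIN;
        }
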
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1274796225.5882.1389.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90151c35
    • perf_events: Fix races in group composition · 8a49542c
      Committed by Peter Zijlstra
      Group siblings don't pin each other or the parent, so when we destroy
      events we must make sure to clean up all cross-referencing pointers.
      
      In particular, for destruction of a group leader we must be able to
      find all its siblings and remove their reference to it.
      
      This means that detaching an event from its context must not detach it
      from the group, otherwise we can end up failing to clear all pointers.
      
      Solve this by clearly separating the attachment to a context and
      attachment to a group, and keep the group composed until we destroy
      the events.
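
      A sketch of what that separation looks like (constant and field names
      patterned on the perf code of this era): the event records its context
      attachment and its group attachment independently, and the two are
      cleared at different times.

        #define PERF_ATTACH_CONTEXT     0x01    /* on the context's lists     */
        #define PERF_ATTACH_GROUP       0x02    /* on a leader's sibling_list */

        /* struct perf_event grows:  int attach_state; */

        /* detaching from the context deliberately leaves the group intact: */
        static void list_del_event(struct perf_event *event,
                                   struct perf_event_context *ctx)
        {
                if (!(event->attach_state & PERF_ATTACH_CONTEXT))
                        return;
                event->attach_state &= ~PERF_ATTACH_CONTEXT;
                /* remove from ctx lists only; PERF_ATTACH_GROUP is cleared
                 * later, when the event itself is destroyed */
        }
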
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8a49542c
    • perf_events: Fix races and clean up perf_event and perf_mmap_data interaction · ac9721f3
      Committed by Peter Zijlstra
      In order to move toward separate buffer objects, rework the whole
      perf_mmap_data construct to be a more self-sufficient entity, one
      with its own lifetime rules.
      
      This greatly sanitizes the whole output redirection code, which
      was riddled with bugs and races.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@kernel.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ac9721f3
  7. 28 May 2010 (1 commit)
  8. 21 May 2010 (3 commits)