1. 13 Oct 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Committed by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address
      sanitizer (see Documentation/kasan.txt).
      
      One eighth of the kernel address space is reserved for shadow
      memory. There was no hole big enough for this, so the virtual
      addresses for the shadow were stolen from the vmalloc area.
      
      At the early boot stage, the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is
      reused as a read-only zero shadow for memory that KASan does not
      currently track (vmalloc).
      After the physical memory has been mapped, pages for the shadow
      memory are allocated and mapped.
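      
      As a rough illustration of the layout: KASan maps every 8 bytes of
      kernel memory to one shadow byte. A minimal sketch of the address
      translation, assuming the usual scale shift of 3 (the shadow offset
      itself is arch- and configuration-specific):
      
          /* Sketch of KASan's shadow address translation. */
          #define KASAN_SHADOW_SCALE_SHIFT 3
      
          static inline void *kasan_mem_to_shadow(const void *addr)
          {
                  return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET;  /* set per configuration */
          }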
      
      Functions like memset/memmove/memcpy do a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is
      important to catch it. The compiler's instrumentation cannot do
      this, since these functions are written in assembly.
      
      KASan replaces the memory functions with manually instrumented
      variants. The original functions are declared as weak symbols so
      that the strong definitions in mm/kasan/kasan.c can replace them.
      The original functions also have aliases with a '__' prefix in
      their names, so the non-instrumented variants can be called when
      needed.
      
      Some files are built without KASan instrumentation (e.g.
      mm/slub.c). For such files, the original mem* functions are
      replaced (via #define) with the prefixed variants to disable the
      memory access checks.
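      
      A minimal sketch of the replacement pattern described above, as a
      C rendering of what the arm64 assembly effectively does (the real
      instrumented definitions live in mm/kasan/kasan.c):
      
          /* Arch code (sketch): the exported name is weak and aliases
           * the '__'-prefixed implementation, so a strong definition
           * elsewhere can take over. */
          void *__memcpy(void *dest, const void *src, size_t n);
          void *memcpy(void *dest, const void *src, size_t n)
                  __attribute__((weak, alias("__memcpy")));
      
          /* mm/kasan/kasan.c (sketch): the strong, instrumented variant
           * checks both buffers, then defers to the real copy. */
          void *memcpy(void *dest, const void *src, size_t n)
          {
                  check_memory_region((unsigned long)src, n, false);
                  check_memory_region((unsigned long)dest, n, true);
                  return __memcpy(dest, src, n);
          }
      
      Files built without instrumentation then simply see
      #define memcpy(dst, src, len) __memcpy(dst, src, len).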
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 12 Oct 2015, 2 commits
  3. 10 Oct 2015, 1 commit
    • arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Committed by Yang Yingliang
      When a cpu is disabled, all of its irqs are migrated to another
      cpu. In some cases the new affinity differs from the old one, so
      the old affinity needs to be updated; but when irq_set_affinity()
      returns IRQ_SET_MASK_OK_DONE, the old affinity is never updated.
      Fix it by using irq_do_set_affinity().
      
      Migrating interrupts is also a core-code matter, so use the
      generic function irq_migrate_all_off_this_cpu() from
      kernel/irq/migration.c to migrate the interrupts.
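      
      A sketch of the resulting arch hook, assuming the shape of the
      arm64 hotplug path (simplified):
      
          int __cpu_disable(void)
          {
                  /* ... mark the CPU offline, tear down per-cpu state ... */
      
                  /*
                   * Let the generic irq code move all irqs off this CPU.
                   * It uses irq_do_set_affinity() internally, so the
                   * stored affinity is updated even when the chip
                   * returns IRQ_SET_MASK_OK_DONE.
                   */
                  irq_migrate_all_off_this_cpu();
      
                  return 0;
          }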
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 07 Oct 2015, 7 commits
  5. 02 Oct 2015, 1 commit
    • arm64: ftrace: fix function_graph tracer panic · ee556d00
      Committed by Li Bin
      When the function graph tracer is enabled, the following sequence
      of operations triggers a panic:
      
      mount -t debugfs nodev /sys/kernel
      echo next_tgid > /sys/kernel/tracing/set_ftrace_filter
      echo function_graph > /sys/kernel/tracing/current_tracer
      ls /proc/
      
      ------------[ cut here ]------------
      [  198.501417] Unable to handle kernel paging request at virtual address cb88537fdc8ba316
      [  198.506126] pgd = ffffffc008f79000
      [  198.509363] [cb88537fdc8ba316] *pgd=00000000488c6003, *pud=00000000488c6003, *pmd=0000000000000000
      [  198.517726] Internal error: Oops: 94000005 [#1] SMP
      [  198.518798] Modules linked in:
      [  198.520582] CPU: 1 PID: 1388 Comm: ls Tainted: G
      [  198.521800] Hardware name: linux,dummy-virt (DT)
      [  198.522852] task: ffffffc0fa9e8000 ti: ffffffc0f9ab0000 task.ti: ffffffc0f9ab0000
      [  198.524306] PC is at next_tgid+0x30/0x100
      [  198.525205] LR is at return_to_handler+0x0/0x20
      [  198.526090] pc : [<ffffffc0002a1070>] lr : [<ffffffc0000907c0>] pstate: 60000145
      [  198.527392] sp : ffffffc0f9ab3d40
      [  198.528084] x29: ffffffc0f9ab3d40 x28: ffffffc0f9ab0000
      [  198.529406] x27: ffffffc000d6a000 x26: ffffffc000b786e8
      [  198.530659] x25: ffffffc0002a1900 x24: ffffffc0faf16c00
      [  198.531942] x23: ffffffc0f9ab3ea0 x22: 0000000000000002
      [  198.533202] x21: ffffffc000d85050 x20: 0000000000000002
      [  198.534446] x19: 0000000000000002 x18: 0000000000000000
      [  198.535719] x17: 000000000049fa08 x16: ffffffc000242efc
      [  198.537030] x15: 0000007fa472b54c x14: ffffffffff000000
      [  198.538347] x13: ffffffc0fada84a0 x12: 0000000000000001
      [  198.539634] x11: ffffffc0f9ab3d70 x10: ffffffc0f9ab3d70
      [  198.540915] x9 : ffffffc0000907c0 x8 : ffffffc0f9ab3d40
      [  198.542215] x7 : 0000002e330f08f0 x6 : 0000000000000015
      [  198.543508] x5 : 0000000000000f08 x4 : ffffffc0f9835ec0
      [  198.544792] x3 : cb88537fdc8ba316 x2 : cb88537fdc8ba306
      [  198.546108] x1 : 0000000000000002 x0 : ffffffc000d85050
      [  198.547432]
      [  198.547920] Process ls (pid: 1388, stack limit = 0xffffffc0f9ab0020)
      [  198.549170] Stack: (0xffffffc0f9ab3d40 to 0xffffffc0f9ab4000)
      [  198.582568] Call trace:
      [  198.583313] [<ffffffc0002a1070>] next_tgid+0x30/0x100
      [  198.584359] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.585503] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.586574] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.587660] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
      [  198.588896] Code: aa0003f5 2a0103f4 b4000102 91004043 (885f7c60)
      [  198.591092] ---[ end trace 6a346f8f20949ac8 ]---
      
      This happens because, with the function graph tracer, if the
      traced function's return value lives in multiple registers
      ([x0-x7]), return_to_handler may corrupt them. So return_to_handler
      must save and restore those registers properly.
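      
      For context, the AArch64 procedure call standard returns small
      aggregates in more than one register, so a return value is not
      always confined to x0. An illustrative (non-kernel) example:
      
          /* A 16-byte aggregate like this comes back in x0 and x1, so a
           * trampoline such as return_to_handler must preserve the full
           * set of possible return registers, not just x0. */
          struct pair {
                  long a;
                  long b;
          };
      
          struct pair make_pair(long a, long b)
          {
                  struct pair p = { a, b };
                  return p;       /* returned in x0/x1 */
          }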
      
      Cc: <stable@vger.kernel.org> # 3.18+
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Acked-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 01 Oct 2015, 1 commit
    • arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions · 0ce3cc00
      Committed by Ard Biesheuvel
      The new Properties Table feature introduced in UEFI v2.5 may
      split memory regions that cover PE/COFF memory images into
      separate code and data regions. Since these regions only differ
      in the type (runtime code vs runtime data) and the permission
      bits, but not in the memory type attributes (UC/WC/WT/WB), the
      spec does not require them to be aligned to 64 KB.
      
      Since the relative offset of PE/COFF .text and .data segments
      cannot be changed on the fly, this means that we can no longer
      pad out those regions to be mappable using 64 KB pages.
      Unfortunately, there is no annotation in the UEFI memory map
      that identifies data regions that were split off from a code
      region, so we must apply this logic to all adjacent runtime
      regions whose attributes only differ in the permission bits.
      
      So instead of rounding each memory region to 64 KB alignment at
      both ends, only round down regions that are not directly
      preceded by another runtime region with the same type
      attributes. Since the UEFI spec does not mandate that the memory
      map be sorted, this means we also need to sort it first.
      
      Note that this change will result in all EFI_MEMORY_RUNTIME
      regions whose start addresses are not aligned to the OS page size
      being mapped with executable permissions (i.e., on kernels
      compiled with 64 KB pages). However, since these mappings are only
      active while UEFI Runtime Services are being invoked, the window
      for abuse is rather small.
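      
      A simplified sketch of the rounding rule described above (all
      helper names here are illustrative, not the real EFI init code):
      
          /* The UEFI spec does not mandate a sorted memory map. */
          sort_memory_map(map);
      
          for_each_runtime_region(map, md) {
                  /*
                   * Only round a region's start down to 64 KB when it is
                   * not directly preceded by an adjacent runtime region
                   * with the same type attributes; otherwise the padding
                   * would overlap its neighbour.
                   */
                  if (!prev || !regions_are_adjacent(prev, md) ||
                      !regions_have_same_attrs(prev, md))
                          md->virt_addr = round_down(md->virt_addr, SZ_64K);
                  prev = md;
          }
      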
      Tested-by: Mark Salter <msalter@redhat.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Reviewed-by: Mark Salter <msalter@redhat.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org> # v4.0+
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 17 Sep 2015, 3 commits
    • arm64: errata: add module build workaround for erratum #843419 · df057cc7
      Committed by Will Deacon
      Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can
      lead to a memory access using an incorrect address in certain sequences
      headed by an ADRP instruction.
      
      There is a linker fix to generate veneers for ADRP instructions, but
      this doesn't work for kernel modules which are built as unlinked ELF
      objects.
      
      This patch adds a new config option for the erratum which, when
      enabled, builds kernel modules with the -mcmodel=large compiler
      flag. This uses absolute addressing for all kernel symbols, thereby
      removing the use of ADRP as a PC-relative form of addressing. The
      ADRP relocs are removed from the module loader so that we fail to
      load any potentially affected modules.
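      
      A sketch of the module-loader side, assuming the shape of the
      relocation switch in arch/arm64/kernel/module.c: with the
      workaround enabled, the ADRP cases are compiled out, so affected
      modules fail to load with -ENOEXEC instead of running unsafe code.
      
          switch (ELF64_R_TYPE(rel[i].r_info)) {
          /* ... */
          #ifndef CONFIG_ARM64_ERRATUM_843419
          case R_AARCH64_ADR_PREL_PG_HI21_NC:
                  overflow_check = false;
                  /* fall through */
          case R_AARCH64_ADR_PREL_PG_HI21:
                  ovf = reloc_insn_imm(RELOC_OP_PAGE, loc, val, 12, 21,
                                       AARCH64_INSN_IMM_ADR);
                  break;
          #endif
          default:
                  pr_err("module %s: unsupported RELA relocation\n",
                         me->name);
                  return -ENOEXEC;
          }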
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: compat: fix vfp save/restore across signal handlers in big-endian · bdec97a8
      Committed by Will Deacon
      When saving/restoring the VFP registers from a compat (AArch32)
      signal frame, we rely on the compat registers forming a prefix of the
      native register file and therefore make use of copy_{to,from}_user to
      transfer between the native fpsimd_state and the compat_vfp_sigframe.
      
      Unfortunately, this doesn't work so well in a big-endian environment.
      Our fpsimd save/restore code operates directly on 128-bit quantities
      (Q registers) whereas the compat_vfp_sigframe represents the registers
      as an array of 64-bit (D) registers. The architecture packs the compat D
      registers into the Q registers, with the least significant bytes holding
      the lower register. Consequently, we need to swap the 64-bit halves when
      converting between these two representations on a big-endian machine.
      
      This patch replaces the __copy_{to,from}_user invocations in our
      compat VFP signal handling code with explicit __put_user loops that
      operate on 64-bit values and swap them accordingly.
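      
      A minimal sketch of the save-side loop, assuming a helper union
      like the one the patch introduces (names illustrative):
      
          /* One 128-bit Q register viewed as two 64-bit halves. On a
           * big-endian kernel the halves sit swapped in memory relative
           * to the compat D-register view, so the union flips them. */
          union vreg128 {
                  __uint128_t raw;
                  struct {
          #ifdef __AARCH64EB__
                          u64 hi, lo;
          #else
                          u64 lo, hi;
          #endif
                  };
          };
      
          for (i = 0; i < num_q_regs && !err; i++) {
                  union vreg128 vreg = { .raw = fpsimd->vregs[i] };
      
                  /* Two compat D registers per Q register, lower first. */
                  err |= __put_user(vreg.lo, &frame->ufp.fpregs[2 * i]);
                  err |= __put_user(vreg.hi, &frame->ufp.fpregs[2 * i + 1]);
          }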
      
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cpu hotplug: ensure we mask out CPU_TASKS_FROZEN in notifiers · e56d82a1
      Committed by Will Deacon
      We have a couple of CPU hotplug notifiers for resetting the CPU debug
      state to a sane value when a CPU comes online.
      
      This patch ensures that we mask out CPU_TASKS_FROZEN so that we
      don't miss any online events occurring due to suspend/resume.
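      
      The usual notifier idiom for this, as a sketch (helper name
      illustrative):
      
          static int dbg_reset_notify(struct notifier_block *self,
                                      unsigned long action, void *data)
          {
                  /* Strip FROZEN so CPU_ONLINE and CPU_ONLINE_FROZEN
                   * (suspend/resume) are handled the same way. */
                  if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
                          reset_debug_state();
      
                  return NOTIFY_OK;
          }
      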
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 15 Sep 2015, 1 commit
  9. 09 Sep 2015, 1 commit
  10. 27 Aug 2015, 1 commit
    • arm64: flush FP/SIMD state correctly after execve() · 674c242c
      Committed by Ard Biesheuvel
      When a task calls execve(), its FP/SIMD state is flushed so that
      none of the original program's state is observable by the incoming
      program.
      
      However, since this flushing consists of setting the in-memory
      copy of the FP/SIMD state to all zeroes, the CPU field is set to
      CPU 0 as well. This indicates to the lazy FP/SIMD preserve/restore
      code that the state does not need to be reread from memory if the
      task is scheduled again on CPU 0 before any other task has entered
      userland (or used FP/SIMD in kernel mode) on that CPU in the
      meantime. If this happens, the FP/SIMD state of the old program
      will still be present in the registers when the new program starts.
      
      So set the CPU field to the invalid value of NR_CPUS when performing
      the flush, by calling fpsimd_flush_task_state().
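      
      For reference, a minimal sketch of what that helper amounts to,
      assuming the per-thread state layout of this era:
      
          /* Invalidate the recorded CPU so the lazy-restore check can
           * never match, forcing the zeroed state to be reloaded. */
          void fpsimd_flush_task_state(struct task_struct *t)
          {
                  t->thread.fpsimd_state.cpu = NR_CPUS;
          }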
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
      Reported-by: Janet Liu <janet.liu@spreadtrum.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 24 Aug 2015, 1 commit
  12. 21 Aug 2015, 1 commit
    • arm64: entry: always restore x0 from the stack on syscall return · 412fcb6c
      Committed by Will Deacon
      We have a micro-optimisation on the fast syscall return path where we
      take care to keep x0 live with the return value from the syscall so that
      we can avoid restoring it from the stack. The benefit of doing this is
      fairly suspect, since we will be restoring x1 from the stack anyway
      (which lives adjacent in the pt_regs structure) and the only additional
      cost is saving x0 back to pt_regs after the syscall handler, which could
      be seen as a poor man's prefetch.
      
      More importantly, this causes issues with the context tracking code.
      
      The ct_user_enter macro ends up branching into C code, which is
      free to use x0 as a scratch register and consequently leads to us
      returning junk back to userspace as the syscall return value.
      Rather than special-casing the context-tracking code, this patch
      removes the questionable optimisation entirely.
      
      Cc: <stable@vger.kernel.org>
      Cc: Larry Bassel <larry.bassel@linaro.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  13. 12 Aug 2015, 1 commit
  14. 10 Aug 2015, 1 commit
    • arm64: VDSO: fix coarse clock monotonicity regression · 878854a3
      Committed by Nathan Lynch
      Since 906c5557 ("timekeeping: Copy the shadow-timekeeper over the
      real timekeeper last") it has become possible on arm64 to:
      
      - Obtain a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE timestamp
        via syscall.
      - Subsequently obtain a timestamp for the same clock ID via VDSO which
        predates the first timestamp (by one jiffy).
      
      This is because arm64's update_vsyscall is deriving the coarse time
      using the __current_kernel_time interface, when it should really be
      using the timekeeper object provided to it by the timekeeping core.
      It happened to work before only because __current_kernel_time would
      access the same timekeeper object which had been passed to
      update_vsyscall.  This is no longer the case.
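      
      A sketch of the direction of the fix (field names as in the
      timekeeping code of this era; simplified):
      
          void update_vsyscall(struct timekeeper *tk)
          {
                  /* ... */
                  /* Derive coarse time from the timekeeper we were
                   * handed, not from a separately-updated global copy. */
                  vdso_data->xtime_coarse_sec  = tk->xtime_sec;
                  vdso_data->xtime_coarse_nsec = tk->tkr_mono.xtime_nsec >>
                                                 tk->tkr_mono.shift;
                  /* ... */
          }
      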
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  15. 07 Aug 2015, 2 commits
    • signal: fix information leak in copy_siginfo_to_user · 26135022
      Committed by Amanieu d'Antras
      This function may copy the si_addr_lsb, si_lower and si_upper fields to
      user mode when they haven't been initialized, which can leak kernel
      stack data to user mode.
      
      Just checking the value of si_code is insufficient because the same
      si_code value is shared between multiple signals.  This is solved by
      checking the value of si_signo in addition to si_code.
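      
      A sketch of the kind of check described, with the SIGBUS
      memory-failure codes as one example (simplified):
      
          /* si_addr_lsb is only valid for the memory-failure SIGBUS
           * codes, so gate on si_signo as well as si_code; otherwise
           * uninitialized stack bytes get copied to user mode. */
          if (from->si_signo == SIGBUS &&
              (from->si_code == BUS_MCEERR_AR ||
               from->si_code == BUS_MCEERR_AO))
                  err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);
      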
      Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • signal: fix information leak in copy_siginfo_from_user32 · 3c00cb5e
      Committed by Amanieu d'Antras
      This function can leak kernel stack data when the user siginfo_t
      has a positive si_code value. The top 16 bits of si_code describe
      which fields in the siginfo_t union are active, but they are
      treated inconsistently between copy_siginfo_from_user32,
      copy_siginfo_to_user32 and copy_siginfo_to_user.
      
      copy_siginfo_from_user32 is called from rt_sigqueueinfo and
      rt_tgsigqueueinfo, in which the user has full control over the top
      16 bits of si_code.
      
      This fixes the following information leaks:
      x86:   8 bytes leaked when sending a signal from a 32-bit process to
             itself. This leak grows to 16 bytes if the process uses x32.
             (si_code = __SI_CHLD)
      x86:   100 bytes leaked when sending a signal from a 32-bit process to
             a 64-bit process. (si_code = -1)
      sparc: 4 bytes leaked when sending a signal from a 32-bit process to a
             64-bit process. (si_code = any)
      
      parisc and s390 have similar bugs, but they are not vulnerable
      because rt_[tg]sigqueueinfo have checks that prevent sending a
      positive si_code to a different process. These bugs are also fixed
      for consistency.
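      
      For context, a sketch of how the union selector encoded in si_code
      worked in this era (constants as in asm-generic/siginfo.h of the
      time; simplified):
      
          #define __SI_MASK       0xffff0000u
          #define __SI_CHLD       (4 << 16)
          #define __SI_CODE(T, N) ((T) | ((N) & 0xffff))
      
          /* The top 16 bits select the active siginfo union member, so
           * a user-controlled positive si_code can steer the kernel
           * into copying a larger member than was initialized. */
          switch (from->si_code & __SI_MASK) {
          case __SI_CHLD:
                  /* copy the child-status fields ... */
                  break;
          /* ... */
          }
      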
      Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 05 Aug 2015, 2 commits
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Committed by Will Deacon
      The arm64 booting document requires that the bootloader has
      cleaned the kernel image to the PoC. However, when a CPU re-enters
      the kernel due to either a CPU hotplug "on" event or resuming from
      a low-power state (e.g. cpuidle), the kernel text may in fact be
      dirty at the PoU due to things like alternative patching or even
      module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: alternatives: ensure secondary CPUs execute ISB after patching · 04b8637b
      Committed by Will Deacon
      In order to guarantee that the patched instruction stream is visible to
      a CPU, that CPU must execute an isb instruction after any related cache
      maintenance has completed.
      
      The instruction patching routines in kernel/insn.c get this right for
      things like jump labels and ftrace, but the alternatives patching omits
      it entirely leaving secondary cores in a potential limbo between the old
      and the new code.
      
      This patch adds an isb following the secondary polling loop in the
      alternatives patching.
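      
      A sketch of the patching callback with the barrier added, assuming
      the shape of the arm64 alternatives code of this era (the polling
      loop itself is introduced by a commit further down this log):
      
          static int __apply_alternatives_multi_stop(void *unused)
          {
                  static int patched;
      
                  if (smp_processor_id()) {
                          /* Secondary CPUs: dumb but safe polling loop. */
                          while (!READ_ONCE(patched))
                                  cpu_relax();
                          isb();  /* fetch the new instruction stream */
                  } else {
                          /* The boot CPU does the actual patching. */
                          __apply_alternatives(&region);
                          /* Barriers provided by the cache maintenance. */
                          WRITE_ONCE(patched, 1);
                  }
      
                  return 0;
          }
      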
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  17. 03 Aug 2015, 2 commits
  18. 01 Aug 2015, 1 commit
    • arm64: restore cpu suspend/resume functionality · b511a659
      Committed by Sudeep Holla
      Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant
      #ifdefs") accidentally retained code for !CONFIG_SMP in the
      cpu_resume function. This resulted in the hash index in x7 being
      zeroed after it had been properly computed; the index is then used
      to obtain the cpu context pointer while resuming.
      
      This patch removes the remnant code and restores the cpu
      suspend/resume functionality.
      
      Fixes: 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  19. 31 Jul 2015, 2 commits
    • ARM64: PCI: do not enable resources on PROBE_ONLY systems · 72407514
      Committed by Lorenzo Pieralisi
      On ARM64 PROBE_ONLY PCI systems, resources are not currently
      claimed; therefore they cannot be enabled, since they do not have
      a valid parent pointer. This in turn prevents enabling PCI devices
      on such systems, causing PCI device initialization to fail.
      
      To solve this issue, resources must be claimed when devices are
      added on PROBE_ONLY systems, which ensures that the resource
      hierarchy is validated and the resource tree is sane. But this
      requires changes in the ARM64 resource management that can
      adversely affect existing PCI set-ups (claiming resources on
      !PROBE_ONLY systems might break existing ARM64 PCI platform
      implementations).
      
      As a temporary solution, in preparation for a proper
      resource-claiming implementation in the ARM64 core, this patch
      enables PCI PROBE_ONLY systems on ARM64 by adding a
      pcibios_enable_device() arch implementation that simply prevents
      enabling resources on PROBE_ONLY systems (mirroring the ARM
      behaviour).
      
      This is always a safe thing to do, because on PROBE_ONLY systems
      the configuration space set-up can be considered immutable; it is
      done in preparation for proper resource claiming, which would
      finally validate the PCI resource tree in the ARM64 arch
      implementation on PROBE_ONLY systems.
      
      For !PROBE_ONLY systems, resource enablement in
      pcibios_enable_device() on ARM64 is implemented as in the current
      PCI core, leaving the behaviour unchanged.
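      
      A minimal sketch of the resulting hook, mirroring the ARM
      behaviour described above:
      
          int pcibios_enable_device(struct pci_dev *dev, int mask)
          {
                  /* On PROBE_ONLY systems the configuration space
                   * set-up is treated as immutable; do not touch it. */
                  if (pci_has_flag(PCI_PROBE_ONLY))
                          return 0;
      
                  return pci_enable_resources(dev, mask);
          }
      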
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: alternative: put secondary CPUs into polling loop during patch · ef5e724b
      Committed by Will Deacon
      When patching the kernel text with alternatives, we may end up patching
      parts of the stop_machine state machine (e.g. atomic_dec_and_test in
      ack_state) and consequently corrupt the instruction stream of any
      secondary CPUs.
      
      This patch passes the cpu_online_mask to stop_machine, forcing all of
      the CPUs into our own callback which can place the secondary cores into
      a dumb (but safe!) polling loop whilst the patching is carried out.
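      
      The invocation itself is a one-liner; a sketch, using the callback
      outlined earlier in this log:
      
          /* Run the callback on every online CPU at once, so no core can
           * execute half-patched code while the text is rewritten. */
          stop_machine(__apply_alternatives_multi_stop, NULL,
                       cpu_online_mask);
      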
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  20. 30 Jul 2015, 2 commits
  21. 28 Jul 2015, 2 commits
  22. 27 Jul 2015, 4 commits