  1. 19 November 2016 (3 commits)
    • arm64: Allow hw watchpoint of length 3,5,6 and 7 · 0ddb8e0b
      Committed by Pratyush Anand
      Since arm64 can support a watchpoint at any byte offset within a double-word
      limit, also support the other lengths within that range.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0ddb8e0b
    • arm64: hw_breakpoint: Handle inexact watchpoint addresses · fdfeff0f
      Committed by Pavel Labath
      Arm64 hardware does not always report a watchpoint hit address that
      matches one of the watchpoints set. It can also report an address
      "near" the watchpoint if a single instruction accesses both watched and
      unwatched addresses. There is no straightforward way, short of
      disassembling the offending instruction, to map that address back to
      the watchpoint.
      
      Previously, when the hardware reported a watchpoint hit on an address
      that did not match our watchpoint (this happens in case of instructions
      which access large chunks of memory such as "stp") the process would
      enter a loop where we would be continually resuming it (because we did
      not recognise that watchpoint hit) and it would keep hitting the
      watchpoint again and again. The tracing process would never get
      notified of the watchpoint hit.
      
      This commit fixes the problem by looking at the watchpoints near the
      address reported by the hardware. If the address does not exactly match
      one of the watchpoints we have set, it attributes the hit to the
      nearest watchpoint we have.  This heuristic is a bit dodgy, but I don't
      think we can do much more, given the hardware limitations.
      Signed-off-by: Pavel Labath <labath@google.com>
      [panand: reworked to rebase on his patches]
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      [will: use __ffs instead of ffs - 1]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fdfeff0f
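      The "nearest watchpoint" heuristic described above can be illustrated with a
      small, self-contained sketch. This is not the kernel implementation; the
      helper names and the lo/hi range arrays are assumptions made purely to show
      the distance-based matching idea.

      	#include <stdint.h>

      	/* Distance from a reported address to one watchpoint's byte range. */
      	static uint64_t wp_distance(uint64_t addr, uint64_t wp_lo, uint64_t wp_hi)
      	{
      		if (addr < wp_lo)
      			return wp_lo - addr;
      		if (addr > wp_hi)
      			return addr - wp_hi;
      		return 0;	/* the reported address falls inside the range */
      	}

      	/* Index of the closest watchpoint, or -1 if none are programmed. */
      	static int closest_watchpoint(uint64_t addr, const uint64_t lo[],
      				      const uint64_t hi[], int nr)
      	{
      		int i, best = -1;
      		uint64_t best_dist = UINT64_MAX;

      		for (i = 0; i < nr; i++) {
      			uint64_t d = wp_distance(addr, lo[i], hi[i]);
      			if (d < best_dist) {
      				best_dist = d;
      				best = i;	/* attribute the hit here */
      			}
      		}
      		return best;
      	}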
    • arm64: Allow hw watchpoint at varied offset from base address · b08fb180
      Committed by Pratyush Anand
      ARM64 hardware supports a watchpoint at any double-word-aligned address.
      However, it can select any consecutive bytes at offsets 0 to 7 from that
      base address. For example, if the base address is programmed as 0x420030 and
      the byte select is 0x1C, then accesses to 0x420032, 0x420033 and 0x420034 will
      generate a watchpoint exception.

      Currently, we do not have such modularity. We can only program byte,
      halfword, word and double-word access exceptions from a base address.

      This patch adds support to overcome the above limitation.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b08fb180
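      The byte-select encoding in the example above is just a bitmask of the eight
      bytes in the watched double word. A minimal standalone illustration (the
      helper below is hypothetical and only reproduces the arithmetic of the
      example; it is not kernel code):

      	#include <stdint.h>
      	#include <stdio.h>

      	/* Each set bit in the 8-bit byte-select field watches one byte at the
      	 * corresponding offset from the double-word-aligned base address. */
      	static void print_watched_bytes(uint64_t base, uint8_t bas)
      	{
      		for (int off = 0; off < 8; off++)
      			if (bas & (1u << off))
      				printf("watched: 0x%llx\n",
      				       (unsigned long long)(base + off));
      	}

      	int main(void)
      	{
      		/* Base 0x420030 with byte select 0x1C (bits 2, 3, 4) watches
      		 * 0x420032, 0x420033 and 0x420034, as in the commit message. */
      		print_watched_bytes(0x420030, 0x1C);
      		return 0;
      	}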
  2. 01 September 2016 (1 commit)
  3. 19 July 2016 (1 commit)
  4. 15 April 2016 (1 commit)
  5. 31 March 2016 (1 commit)
    • perf/core: Set event's default ::overflow_handler() · 1879445d
      Committed by Wang Nan
      Set a default event->overflow_handler in perf_event_alloc() so we don't
      need to check event->overflow_handler in __perf_event_overflow().
      Following commits can provide a different default overflow_handler.
      
      Initial idea comes from Peter:
      
        http://lkml.kernel.org/r/20130708121557.GA17211@twins.programming.kicks-ass.net
      
      Since the default value of event->overflow_handler is not NULL, existing
      'if (!overflow_handler)' checks need to be changed.
      
      is_default_overflow_handler() is introduced for this.
      
      No extra performance overhead is introduced into the hot path because the
      original code already had to read this handler from memory. A conditional
      branch is avoided, so we actually remove some instructions.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <pi3orama@163.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/r/1459147292-239310-3-git-send-email-wangnan0@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1879445d
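      The shape of the change can be mocked up outside the kernel. The snippet
      below is a standalone sketch of the idea only (simplified types, not the
      perf core's real definitions): install a default handler at allocation time
      and test for it with is_default_overflow_handler() instead of testing for
      NULL.

      	#include <stdbool.h>
      	#include <stddef.h>

      	struct perf_event;
      	typedef void (*overflow_handler_t)(struct perf_event *);

      	/* Stand-in for the default output handler. */
      	static void perf_event_output(struct perf_event *event) { (void)event; }

      	struct perf_event {
      		overflow_handler_t overflow_handler;
      	};

      	/* At allocation time, fall back to the default instead of NULL. */
      	static void install_overflow_handler(struct perf_event *event,
      					     overflow_handler_t handler)
      	{
      		event->overflow_handler = handler ? handler : perf_event_output;
      	}

      	/* Callers that used to test "if (!overflow_handler)" now use this. */
      	static bool is_default_overflow_handler(const struct perf_event *event)
      	{
      		return event->overflow_handler == perf_event_output;
      	}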
  6. 12 October 2015 (1 commit)
  7. 07 October 2015 (1 commit)
    • arm64: hw_breakpoint: use target state to determine ABI behaviour · 8f48c062
      Committed by Will Deacon
      The arm64 hw_breakpoint interface is slightly less flexible than its
      32-bit counterpart, thanks to some changes in the architecture rendering
      unaligned watchpoint addresses obsolete for AArch64.
      
      However, in a multi-arch environment (i.e. debugging a 32-bit target
      with a 64-bit GDB under a 64-bit kernel), we need to provide a feature
      compatible interface to GDB in order for debugging to function correctly.
      
      This patch adds a new helper, is_compat_bp, to our hw_breakpoint
      implementation which changes the interface behaviour based on the
      architecture of the debug target as opposed to the debugger itself.
      This allows debugging to function as expected for multi-arch
      configurations without relying on deprecated architectural behaviours
      when debugging native applications.
      
      Cc: Yao Qi <yao.qi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8f48c062
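      The key point is that the helper inspects the breakpoint's target task
      rather than the caller. A standalone mock of that idea (simplified stand-in
      types; not the kernel's actual is_compat_bp() definition):

      	#include <stdbool.h>
      	#include <stddef.h>

      	struct task {
      		bool is_compat;		/* true for a 32-bit (AArch32) task */
      	};

      	struct hw_breakpoint {
      		struct task *target;	/* NULL for per-CPU breakpoints */
      	};

      	/* Decide ABI behaviour from the debug target, not the debugger. */
      	static bool is_compat_bp(const struct hw_breakpoint *bp)
      	{
      		/* Per-CPU breakpoints have no target task: treat as native. */
      		return bp->target && bp->target->is_compat;
      	}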
  8. 17 September 2015 (1 commit)
  9. 28 July 2015 (1 commit)
    • arm64: debug: rename enum debug_el to avoid symbol collision · 6f883d10
      Committed by Will Deacon
      lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a
      contraction of "element". This conflicts with 'enum debug_el' in our
      asm/debug-monitors.h header file, where "el" stands for Exception Level.

      The result is a build failure when targeting allmodconfig, so rename our
      enum to 'dbg_active_el' to be slightly more explicit about what it is.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6f883d10
  10. 21 July 2015 (1 commit)
  11. 27 June 2015 (1 commit)
  12. 23 March 2015 (1 commit)
  13. 27 January 2015 (1 commit)
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Committed by Lorenzo Pieralisi
      The ARM64_CPU_SUSPEND config option was introduced to make the code providing
      context save/restore selectable only on platforms requiring power
      management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires that the code configured
      by ARM64_CPU_SUSPEND (context save/restore) be compiled in, so that the
      CPU idle driver can rely on CPU operations carrying out context
      save/restore.

      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is therefore
      forced to select ARM64_CPU_SUSPEND, even if its dependencies (i.e. PM_SLEEP)
      may not be met, which is not a clean way of handling the kernel
      configuration option.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
      in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      af3cfdbf
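      In source terms, the change amounts to re-keying the conditionally compiled
      context save/restore code. A schematic before/after sketch (the function
      names here are hypothetical placeholders, not the actual symbols touched by
      the patch):

      	/* Before: only built when the arm64-specific option was set. */
      	#ifdef CONFIG_ARM64_CPU_SUSPEND
      	void arm64_save_cpu_context(void);
      	void arm64_restore_cpu_context(void);
      	#endif

      	/* After: keyed off the generic CPU_PM symbol, which per the commit
      	 * message is selected whenever SUSPEND or CPU_IDLE is configured. */
      	#ifdef CONFIG_CPU_PM
      	void arm64_save_cpu_context(void);
      	void arm64_restore_cpu_context(void);
      	#endif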
  14. 12 May 2014 (1 commit)
  15. 20 March 2014 (1 commit)
    • arm64, hw_breakpoint.c: Fix CPU hotplug callback registration · 3d0dc643
      Committed by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the hw-breakpoint code in arm64 by using this latter form of callback
      registration.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      3d0dc643
  16. 11 January 2014 (1 commit)
    • arm64: kernel: restore HW breakpoint registers in cpu_suspend · 65c021bb
      Committed by Lorenzo Pieralisi
      When a CPU resumes from low power, it restores HW breakpoint and
      watchpoint slots through a CPU PM notifier. Since we want to enable
      debugging as early as possible in the resume path, the mdscr content
      is restored along with the general purpose registers in the cpu_suspend API
      and debug exceptions are re-enabled when cpu_suspend returns. Since the
      CPU PM notifier is run after a CPU has been resumed, we cannot expect the
      HW breakpoint registers to contain sane values until the notifier is run,
      since the HW breakpoint registers' content is unknown at reset; this means
      that the CPU might run with debug exceptions enabled, mdscr restored, but HW
      breakpoint registers containing junk values that can trigger spurious
      debug exceptions.

      This patch fixes the current HW breakpoint restore by moving the HW breakpoint
      register restoration into the cpu_suspend API, before debug exceptions are
      enabled. This way, as soon as the cpu_suspend function returns, the
      kernel can resume debugging with sane values in the HW breakpoint registers.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      65c021bb
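      The ordering constraint can be mocked up in a few lines. This is a
      standalone illustration only; the hook and function names are invented and
      do not correspond to the kernel's actual suspend/resume symbols.

      	#include <stdio.h>

      	/* Restore hook installed by the breakpoint code, called by the
      	 * resume path before debug exceptions are re-enabled. */
      	static void (*dbg_restore_hook)(void);

      	static void set_dbg_restore_hook(void (*hook)(void))
      	{
      		dbg_restore_hook = hook;
      	}

      	static void restore_hw_breakpoints(void)
      	{
      		puts("reprogram breakpoint/watchpoint registers");
      	}

      	static void cpu_resume_path(void)
      	{
      		/* general purpose registers and mdscr already restored */
      		if (dbg_restore_hook)
      			dbg_restore_hook();		/* sane breakpoint state */
      		puts("re-enable debug exceptions");	/* only now safe */
      	}

      	int main(void)
      	{
      		set_dbg_restore_hook(restore_hw_breakpoints);
      		cpu_resume_path();
      		return 0;
      	}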
  17. 17 December 2013 (2 commits)
  18. 29 October 2013 (1 commit)
  19. 15 July 2013 (1 commit)
    • arm64: delete __cpuinit usage from all users · b8c6453a
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove __cpuinit from the
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/arm64 uses of the __cpuinit macros from
      all C files.  Currently arm64 does not have any __CPUINIT used in
      assembly files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      b8c6453a
  20. 17 September 2012 (1 commit)