1. 29 April 2019 (1 commit)
    • perf/x86: Make perf callchains work without CONFIG_FRAME_POINTER · d15d3568
      Committed by Kairui Song
      Currently, perf callchains don't work well with the ORC unwinder
      when sampling from a tracepoint; we get a useless in-kernel callchain
      like this:
      
      perf  6429 [000]    22.498450:             kmem:mm_page_alloc: page=0x176a17 pfn=1534487 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffbe23e32e __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7efdf7f7d3e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	5651468729c1 [unknown] (/usr/bin/perf)
      	5651467ee82a main+0x69a (/usr/bin/perf)
      	7efdf7eaf413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      
      The root cause is that, for tracepoint events, perf doesn't capture a
      real snapshot of the hardware registers. Instead it reads the caller's
      registers and composes a fake register snapshot which is supposed to
      contain enough information to start unwinding. However, without
      CONFIG_FRAME_POINTER the caller's BP cannot be retrieved as the frame
      pointer, so the current frame pointer is returned instead. The result
      is an invalid register combination that confuses the unwinder and ends
      the stacktrace early.
      
      So in this case, don't try to dump BP at all, and let the unwinder start
      directly when the register set is not a real snapshot. Use SP
      as the skip mark: the unwinder will skip all frames until it meets
      the frame of the tracepoint caller.
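
      For illustration, a minimal user-space sketch of the "skip mark" idea
      (toy types and values, not the actual kernel patch): frames whose stack
      pointer lies below the recorded SP still belong to perf's own call chain
      and are discarded.

        #include <stdio.h>

        struct frame {
            unsigned long long ip;   /* return address of the frame    */
            unsigned long long sp;   /* stack pointer within the frame */
        };

        /* Keep only frames at or above the skip mark.  Stacks grow downwards,
         * so perf's own frames sit at lower addresses than the caller. */
        static void walk_callchain(const struct frame *frames, int nr,
                                   unsigned long long skip_mark_sp)
        {
            for (int i = 0; i < nr; i++) {
                if (frames[i].sp < skip_mark_sp)
                    continue;                  /* still inside perf itself */
                printf("  %#llx\n", frames[i].ip);
            }
        }

        int main(void)
        {
            struct frame stack[] = {
                { 0xffffffffb5210000ULL, 0x1000 },  /* perf internals    */
                { 0xffffffffb5210040ULL, 0x1040 },  /* perf internals    */
                { 0xffffffffb523e2aeULL, 0x1100 },  /* tracepoint caller */
            };

            walk_callchain(stack, 3, 0x1100);  /* SP recorded at the tracepoint */
            return 0;
        }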
      
      Tested with both the frame pointer unwinder and the ORC unwinder; this
      makes the perf callchain recover the full kernel-space stacktrace again,
      like this:
      
      perf  6503 [000]  1567.570191:             kmem:mm_page_alloc: page=0x16c904 pfn=1493252 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffb523e2ae __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52383bd __get_free_pages+0xd (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fd28a __pollwait+0x8a (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb521426f perf_poll+0x2f (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fe3e2 do_sys_poll+0x252 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52ff027 __x64_sys_poll+0x37 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb500418b do_syscall_64+0x5b (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb5a0008c entry_SYSCALL_64_after_hwframe+0x44 (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7f71e92d03e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	55a22960d9c1 [unknown] (/usr/bin/perf)
      	55a22958982a main+0x69a (/usr/bin/perf)
      	7f71e9202413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Kairui Song <kasong@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190422162652.15483-1-kasong@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d15d3568
  2. 18 April 2019 (1 commit)
  3. 16 April 2019 (22 commits)
    • perf/x86/intel: Add Tremont core PMU support · 6daeb873
      Committed by Kan Liang
      Add perf core PMU support for Intel Tremont CPU.
      
      The init code is based on Goldmont plus.
      
      General-purpose counter 0 and fixed counter 0 have less skid.
      Force :ppp events onto general-purpose counter 0.
      Force instruction:ppp onto general-purpose counter 0 and fixed counter 0.
      
      Update the LLC cache event table and the OFFCORE_RESPONSE mask.
      
      Adaptive PEBS, which is already enabled on ICL, is also supported
      on Tremont. No extra code required.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/1554922629-126287-3-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6daeb873
    • perf/x86/intel/uncore: Add Intel Icelake uncore support · 6e394376
      Committed by Kan Liang
      Add Intel Icelake uncore support:
      
       - The init code is based on Skylake
       - Add new PCI id for IMC
       - New MSR address for CBOX
       - Get CBOX# from CNL_UNC_CBO_CONFIG MSR directly
       - Create a new PMU for fixed clocktick counter
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-13-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6e394376
    • perf/x86/msr: Add Icelake support · cf50d79a
      Committed by Kan Liang
      Icelake is the same as the existing Skylake parts.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-12-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cf50d79a
    • perf/x86/intel/rapl: Add Icelake support · b3377c3a
      Committed by Kan Liang
      Icelake supports the same RAPL counters as Skylake.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-11-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b3377c3a
    • perf/x86/intel/cstate: Add Icelake support · f08c47d1
      Committed by Kan Liang
      Icelake uses the same C-state residency events as Sandy Bridge.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-10-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f08c47d1
    • perf/x86/intel: Add Icelake support · 60176089
      Committed by Kan Liang
      Add Icelake core PMU perf code, including constraint tables and the main
      enable code.
      
      Icelake expanded the generic counters to always 8 even with HT on, but a
      range of events cannot be scheduled on the extra 4 counters.
      Add new constraint ranges to describe this to the scheduler.
      The number of constraints that need to be checked is larger now than
      with earlier CPUs.
      At some point we may need a new data structure to look them up more
      efficiently than with a linear search, but so far this still seems
      acceptable.
      
      Icelake added a new fixed counter SLOTS. Full support for it is added
      later in the patch series.
      
      The cache events table is identical to Skylake.
      
      Compared to a PEBS instruction event on a general-purpose counter, fixed
      counter 0 has less skid. Force instruction:ppp to always use fixed counter 0.
      Originally-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-9-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      60176089
    • perf/x86: Support constraint ranges · 63b79f6e
      Committed by Peter Zijlstra
      Icelake extended the general counters to 8, even when SMT is enabled.
      However only a (large) subset of the events can be used on all 8
      counters.
      
      The events that can or cannot be used on all counters are organized
      in ranges.
      
      A lot of scheduler constraints are required to handle all this.
      
      To avoid blowing up the tables add event code ranges to the constraint
      tables, and a new inline function to match them.
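
      A sketch of the idea (hypothetical structure layout, not the kernel's
      exact one): a constraint entry carries an optional range length, and a
      single helper matches both the single-code and the range form.

        #include <stdbool.h>
        #include <stdint.h>

        /* One table entry names either a single event code or an inclusive range. */
        struct constraint {
            uint64_t code;       /* first event code covered              */
            uint64_t size;       /* 0: single code; N: codes code..code+N */
            uint64_t cntr_mask;  /* counters this constraint allows       */
        };

        static bool constraint_match(const struct constraint *c, uint64_t event_code)
        {
            if (!c->size)
                return event_code == c->code;
            return event_code >= c->code && event_code <= c->code + c->size;
        }

      One range entry can then stand in for what would otherwise be a long run
      of near-identical single-code entries.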
      Originally-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # developer hat on
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # maintainer hat on
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-8-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      63b79f6e
    • perf/x86/lbr: Avoid reading the LBRs when adaptive PEBS handles them · d3617b98
      Committed by Andi Kleen
      With adaptive PEBS the CPU can directly supply the LBR information,
      so we don't need to read it again. But the LBRs still need to be
      enabled. Add a special count to the cpuc that distinguishes these
      two cases, and avoid reading the LBRs unnecessarily when PEBS is
      active.
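
      A sketch of the bookkeeping (hypothetical field and function names): a
      second counter tracks how many of the LBR users get their data from the
      PEBS record, and the read path bails out when that covers everyone.

        /* The expensive rdmsr loop, elided for this sketch. */
        static void read_lbr_msrs(void) { }

        struct cpu_state {
            int lbr_users;       /* events that need the LBRs enabled        */
            int lbr_pebs_users;  /* of those, events whose LBR data arrives
                                  * in the adaptive PEBS record              */
        };

        static void lbr_read(struct cpu_state *cpuc)
        {
            if (!cpuc->lbr_users)
                return;

            /* Every LBR consumer is served by PEBS: skip the MSR reads. */
            if (cpuc->lbr_users == cpuc->lbr_pebs_users)
                return;

            read_lbr_msrs();
        }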
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-7-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d3617b98
    • perf/x86/intel: Support adaptive PEBS v4 · c22497f5
      Committed by Kan Liang
      Adaptive PEBS is a new way to report PEBS sampling information. Instead
      of a fixed-size record for all PEBS events, it allows configuring the
      PEBS record to include only the information needed. Events can then opt
      in to use such an extended record, or stay with a basic record which
      only contains the IP.
      
      The major new feature is support for LBRs in the PEBS record.
      Besides normal LBR, this allows (much faster) large PEBS, while still
      supporting callstacks through the callstack LBR. So essentially a lot of
      profiling can now be done without frequent interrupts, dropping the
      overhead significantly.
      
      The main requirement still is to use a period, and not use frequency
      mode, because frequency mode requires reevaluating the frequency on each
      overflow.
      
      The floating point state (XMM) is also supported, which allows efficient
      profiling of FP function arguments.
      
      Introduce a specific drain function to handle the variable-length records.
      Use a new callback to parse the new record format, and also handle the
      STATUS field now being at a different offset.
      
      Add code to set up the configuration register. Since there is only a
      single configuration register, all events either get the full superset of
      requested fields, or only the basic record.
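
      A sketch of the "single register" point (hypothetical bit names modeled
      on the description above): the programmed value is simply the union of
      what every extended-record event asked for.

        #include <stdint.h>

        #define DATACFG_MEMINFO  (1ULL << 0)
        #define DATACFG_GPRS     (1ULL << 1)
        #define DATACFG_XMMS     (1ULL << 2)
        #define DATACFG_LBRS     (1ULL << 3)

        struct pebs_event {
            int      extended;   /* opted in to the extended record */
            uint64_t data_cfg;   /* fields this event asked for     */
        };

        /* The single shared register gets the union of everything requested;
         * basic-record events contribute nothing. */
        static uint64_t compute_pebs_data_cfg(const struct pebs_event *evs, int nr)
        {
            uint64_t cfg = 0;

            for (int i = 0; i < nr; i++)
                if (evs[i].extended)
                    cfg |= evs[i].data_cfg;

            return cfg;
        }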
      Originally-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-6-kan.liang@linux.intel.com
      [ Renamed GPRS => GP. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c22497f5
    • perf/x86/intel/ds: Extract code of event update in short period · 477f00f9
      Committed by Kan Liang
      drain_pebs() can be called twice in a short period for an auto-reload
      event in pmu::read(). intel_pmu_save_and_restart_reload() should be
      called to update event->count.
      
      This case should also be handled on Icelake. Extract the code for
      later reuse.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-5-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      477f00f9
    • perf/x86/intel: Extract memory code PEBS parser for reuse · 48f38aa4
      Committed by Andi Kleen
      Extract some code related to memory profiling from the PEBS record
      parser into separate functions. It can be reused by the upcoming
      adaptive PEBS parser. No functional changes.
      Rename intel_hsw_weight to intel_get_tsx_weight, and
      intel_hsw_transaction to intel_get_tsx_transaction, because the input is
      no longer the HSW PEBS format.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-4-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      48f38aa4
    • perf/x86: Support outputting XMM registers · 878068ea
      Committed by Kan Liang
      Starting with Icelake, XMM registers can be collected in the PEBS record,
      but the current code only outputs pt_regs.
      
      Add a new struct x86_perf_regs for both pt_regs and xmm_regs. The
      xmm_regs field will later be used to keep a pointer to the PEBS record
      which has the XMM information.
      
      XMM registers are 128 bit. To simplify the code, each one is handled like
      two different registers, which means setting two bits in the register
      bitmap. This also allows sampling only the lower 64 bits of an XMM register.
      
      The index of the XMM registers starts at 32. There are 16 XMM registers,
      so all the reserved space for regs is used. Remove REG_RESERVED.
      
      Add PERF_REG_X86_XMM_MAX, which stands for the max number of all x86
      regs including both GPRs and XMM.
      
      Add REG_NOSUPPORT for 32-bit to exclude unsupported registers.
      
      Previous platforms cannot collect XMM information in the PEBS record.
      Add pebs_no_xmm_regs to indicate the unsupported platforms.
      
      The common code still validates the supported registers. However, it
      cannot check model-specific registers such as XMM. Add an extra check in
      x86_pmu_hw_config() to reject invalid configurations of regs_user and
      regs_intr. regs_user never supports XMM collection; regs_intr only
      supports XMM collection when sampling a PEBS event on Icelake and later
      platforms.
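
      A sketch of the numbering scheme described above (simplified helpers, not
      the exact kernel enum): with the XMM block starting at index 32 and two
      mask bits per 128-bit register, selecting a full XMM register sets a pair
      of adjacent bits, while the lower half alone needs only one.

        #include <stdint.h>

        #define XMM_BASE_IDX  32   /* first XMM index, after the GPRs */
        #define XMM_NR        16

        /* Both 64-bit halves of XMMn: two consecutive bits in the mask. */
        static uint64_t xmm_full_mask(int n)
        {
            return 3ULL << (XMM_BASE_IDX + 2 * n);
        }

        /* Only the lower 64 bits of XMMn: the first of the two bits. */
        static uint64_t xmm_low_half_mask(int n)
        {
            return 1ULL << (XMM_BASE_IDX + 2 * n);
        }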
      Originally-by: Andi Kleen <ak@linux.intel.com>
      Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: https://lkml.kernel.org/r/20190402194509.2832-3-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      878068ea
    • perf/x86/intel: Force resched when TFA sysctl is modified · f447e4eb
      Committed by Stephane Eranian
      This patch guarantees to the sysadmin that when TFA is disabled, no PMU
      event is using PMC3 by the time the echo command returns. Vice versa, when
      TFA is enabled, the PMU can use PMC3 immediately (to eliminate possible
      multiplexing).
      
        $ perf stat -a -I 1000 --no-merge -e branches,branches,branches,branches
           1.000123979    125,768,725,208      branches
           1.000562520    125,631,000,456      branches
           1.000942898    125,487,114,291      branches
           1.001333316    125,323,363,620      branches
           2.004721306    125,514,968,546      branches
           2.005114560    125,511,110,861      branches
           2.005482722    125,510,132,724      branches
           2.005851245    125,508,967,086      branches
           3.006323475    125,166,570,648      branches
           3.006709247    125,165,650,056      branches
           3.007086605    125,164,639,142      branches
           3.007459298    125,164,402,912      branches
           4.007922698    125,045,577,140      branches
           4.008310775    125,046,804,324      branches
           4.008670814    125,048,265,111      branches
           4.009039251    125,048,677,611      branches
           5.009503373    125,122,240,217      branches
           5.009897067    125,122,450,517      branches
      
      Then on another connection, sysadmin does:
      
        $ echo  1 >/sys/devices/cpu/allow_tsx_force_abort
      
      Then perf stat adjusts the events immediately:
      
           5.010286029    125,121,393,483      branches
           5.010646308    125,120,556,786      branches
           6.011113588    124,963,351,832      branches
           6.011510331    124,964,267,566      branches
           6.011889913    124,964,829,130      branches
           6.012262996    124,965,841,156      branches
           7.012708299    124,419,832,234      branches [79.69%]
           7.012847908    124,416,363,853      branches [79.73%]
           7.013225462    124,400,723,712      branches [79.73%]
           7.013598191    124,376,154,434      branches [79.70%]
           8.014089834    124,250,862,693      branches [74.98%]
           8.014481363    124,267,539,139      branches [74.94%]
           8.014856006    124,259,519,786      branches [74.98%]
           8.014980848    124,225,457,969      branches [75.04%]
           9.015464576    124,204,235,423      branches [75.03%]
           9.015858587    124,204,988,490      branches [75.04%]
           9.016243680    124,220,092,486      branches [74.99%]
           9.016620104    124,231,260,146      branches [74.94%]
      
      And vice versa if the sysadmin does:
      
        $ echo  0 >/sys/devices/cpu/allow_tsx_force_abort
      
      Events are again spread over the 4 counters:
      
          10.017096277    124,276,230,565      branches [74.96%]
          10.017237209    124,228,062,171      branches [75.03%]
          10.017478637    124,178,780,626      branches [75.03%]
          10.017853402    124,198,316,177      branches [75.03%]
          11.018334423    124,602,418,933      branches [85.40%]
          11.018722584    124,602,921,320      branches [85.42%]
          11.019095621    124,603,956,093      branches [85.42%]
          11.019467742    124,595,273,783      branches [85.42%]
          12.019945736    125,110,114,864      branches
          12.020330764    125,109,334,472      branches
          12.020688740    125,109,818,865      branches
          12.021054020    125,108,594,014      branches
          13.021516774    125,109,164,018      branches
          13.021903640    125,108,794,510      branches
          13.022270770    125,107,756,978      branches
          13.022630819    125,109,380,471      branches
          14.023114989    125,133,140,817      branches
          14.023501880    125,133,785,858      branches
          14.023868339    125,133,852,700      branches
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: kan.liang@intel.com
      Cc: nelson.dsouza@intel.com
      Cc: tonyj@suse.com
      Link: https://lkml.kernel.org/r/20190408173252.37932-3-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f447e4eb
    • perf/core: Add perf_pmu_resched() as global function · c68d224e
      Committed by Stephane Eranian
      This patch adds perf_pmu_resched(), a global function that can be called
      to force rescheduling of the events for a given PMU. The function locks
      both cpuctx and task_ctx internally. This will be used by a subsequent
      patch.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ Simplified the calling convention. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: kan.liang@intel.com
      Cc: nelson.dsouza@intel.com
      Cc: tonyj@suse.com
      Link: https://lkml.kernel.org/r/20190408173252.37932-2-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c68d224e
    • cc867094
    • perf/x86: Fix incorrect PEBS_REGS · 9d5dcc93
      Committed by Kan Liang
      PEBS_REGS is used as the mask of registers supported for large PEBS.
      However, the mask cannot filter sample_regs_user/sample_regs_intr
      correctly.
      
      (1ULL << PERF_REG_X86_*) should be used instead of PERF_REG_X86_*, which
      is only the index.
      
      Rename PEBS_REGS to PEBS_GP_REGS, because the mask is only for
      general-purpose registers.
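
      The shape of the bug and the fix, as a standalone sketch (made-up
      register names, not the kernel's PERF_REG_X86_* constants):

        #include <stdint.h>

        /* Register *indices*, illustrative values only. */
        enum { REG_AX = 0, REG_BX = 1, REG_CX = 2 };

        /* Wrong: ORing indices yields a small meaningless number (0|1|2 = 3). */
        #define GP_REGS_BROKEN  (REG_AX | REG_BX | REG_CX)

        /* Right: each supported register contributes one bit to the mask. */
        #define GP_REGS_FIXED   ((1ULL << REG_AX) | \
                                 (1ULL << REG_BX) | \
                                 (1ULL << REG_CX))

        /* A sample_regs request is valid only if it stays within the mask. */
        static int regs_supported(uint64_t sample_regs)
        {
            return (sample_regs & ~GP_REGS_FIXED) == 0;
        }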
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Fixes: 2fe1bc1f ("perf/x86: Enable free running PEBS for REGS_USER/INTR")
      Link: https://lkml.kernel.org/r/20190402194509.2832-2-kan.liang@linux.intel.com
      [ Renamed it to PEBS_GP_REGS - as 'GPRS' is used elsewhere ;-) ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9d5dcc93
    • perf/ring_buffer: Fix AUX record suppression · 339bc418
      Committed by Alexander Shishkin
      The following commit:
      
        1627314f ("perf: Suppress AUX/OVERWRITE records")
      
      has an unintended side-effect of also suppressing all AUX records with no
      flags and non-zero size, i.e. all the regular records in full trace mode.
      This breaks some use cases.
      
      Fix this by restoring "regular" AUX records.
      Reported-by: Ben Gainey <Ben.Gainey@arm.com>
      Tested-by: Ben Gainey <Ben.Gainey@arm.com>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 1627314f ("perf: Suppress AUX/OVERWRITE records")
      Link: https://lkml.kernel.org/r/20190329091338.29999-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      339bc418
    • perf/core: Fix the address filtering fix · 52a44f83
      Committed by Alexander Shishkin
      The following recent commit:
      
        c60f83b8 ("perf, pt, coresight: Fix address filters for vmas with non-zero offset")
      
      changes the address filtering logic to communicate filter ranges to the
      PMU driver via a single address-range object, instead of having the driver
      do the final bit of math.
      
      That change forgets to take kernel filters into account; they are not
      calculated the same way as DSO-based filters.
      
      Fix that by passing the kernel filters the same way as file-based filters.
      This doesn't require any additional changes in the drivers.
      Reported-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: c60f83b8 ("perf, pt, coresight: Fix address filters for vmas with non-zero offset")
      Link: https://lkml.kernel.org/r/20190329091212.29870-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      52a44f83
    • Merge branch 'linus' into perf/core, to pick up fixes · 496156e3
      Committed by Ingo Molnar
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      496156e3
    • kprobes: Fix error check when reusing optimized probes · 5f843ed4
      Committed by Masami Hiramatsu
      The following commit introduced a bug in one of our error paths:
      
        819319fc ("kprobes: Return error if we fail to reuse kprobe instead of BUG_ON()")
      
      It missed handling the return value of kprobe_optready() as an error
      value. In reality, kprobe_optready() returns a bool result, so the "true"
      case means success and must not be passed back to the caller as an error.
      
      This causes some errors on kprobe boot-time selftests on ARM:
      
       [   ] Beginning kprobe tests...
       [   ] Probe ARM code
       [   ]     kprobe
       [   ]     kretprobe
       [   ] ARM instruction simulation
       [   ]     Check decoding tables
       [   ]     Run test cases
       [   ] FAIL: test_case_handler not run
       [   ] FAIL: Test andge	r10, r11, r14, asr r7
       [   ] FAIL: Scenario 11
       ...
       [   ] FAIL: Scenario 7
       [   ] Total instruction simulation tests=1631, pass=1433 fail=198
       [   ] kprobe tests failed
      
      This can happen if an optimized probe is unregistered and the next
      kprobe is registered on the same address before the previous probe
      has been reclaimed.
      
      If this happens, a hidden aggregated probe may be kept in memory,
      and no new kprobe can probe the same address. Also, in that case
      register_kprobe() will return "1" instead of a negative error value,
      which can mislead caller logic.
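
      The error-path pattern being fixed, as a self-contained sketch (stand-in
      names and stubs; not the exact kernel code in kernel/kprobes.c):

        #include <errno.h>
        #include <stdbool.h>

        struct kprobe { int dummy; };
        static bool kprobe_optready_stub(struct kprobe *ap) { (void)ap; return true; }
        static void optimize_kprobe_stub(struct kprobe *ap) { (void)ap; }

        /* Buggy shape: the bool leaks out as "1", which callers treat as an
         * error, and on the normal "ready" path optimization never runs. */
        static int reuse_probe_buggy(struct kprobe *ap)
        {
            int ret = kprobe_optready_stub(ap);
            if (ret)
                return ret;
            optimize_kprobe_stub(ap);
            return 0;
        }

        /* Fixed shape: "not ready" is the error, mapped to a negative errno. */
        static int reuse_probe_fixed(struct kprobe *ap)
        {
            if (!kprobe_optready_stub(ap))
                return -EINVAL;
            optimize_kprobe_stub(ap);
            return 0;
        }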
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naveen N . Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org # v5.0+
      Fixes: 819319fc ("kprobes: Return error if we fail to reuse kprobe instead of BUG_ON()")
      Link: http://lkml.kernel.org/r/155530808559.32517.539898325433642204.stgit@devnote2
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5f843ed4
    • Merge tag 'libnvdimm-fixes-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm · 618d919c
      Committed by Linus Torvalds
      Pull libnvdimm fixes from Dan Williams:
       "I debated holding this back for the v5.2 merge window due to the size
        of the "zero-key" changes, but affected users would benefit from
        having the fixes sooner. It did not make sense to change the zero-key
        semantic in isolation for the "secure-erase" command, but instead
        include it for all security commands.
      
        The short background on the need for these changes is that some NVDIMM
        platforms enable security with a default zero-key rather than let the
        OS specify the initial key. This makes the security enabling that
        landed in v5.0 unusable for some users.
      
        Summary:
      
         - Compatibility fix for nvdimm-security implementations with a
           default zero-key.
      
         - Miscellaneous small fixes for out-of-bound accesses, cleanup after
           initialization failures, and missing debug messages"
      
      * tag 'libnvdimm-fixes-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
        tools/testing/nvdimm: Retain security state after overwrite
        libnvdimm/pmem: fix a possible OOB access when read and write pmem
        libnvdimm/security, acpi/nfit: unify zero-key for all security commands
        libnvdimm/security: provide fix for secure-erase to use zero-key
        libnvdimm/btt: Fix a kmemdup failure check
        libnvdimm/namespace: Fix a potential NULL pointer dereference
        acpi/nfit: Always dump _DSM output payload
      618d919c
    • Merge tag 'fsdax-fix-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm · 5512320c
      Committed by Linus Torvalds
      Pull fsdax fix from Dan Williams:
       "A single filesystem-dax fix. It has been lingering in -next for a long
        while and there are no other fsdax fixes on the horizon:
      
         - Avoid a crash scenario with architectures like powerpc that require
           'pgtable_deposit' for the zero page"
      
      * tag 'fsdax-fix-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
        fs/dax: Deposit pagetable even when installing zero page
      5512320c
  4. 15 April 2019 (6 commits)
    • Linux 5.1-rc5 · dc4060a5
      Committed by Linus Torvalds
      dc4060a5
    • Merge branch 'page-refs' (page ref overflow) · 6b3a7077
      Committed by Linus Torvalds
      Merge page ref overflow branch.
      
      Jann Horn reported that he can overflow the page ref count with
      sufficient memory (and a filesystem that is intentionally extremely
      slow).
      
      Admittedly it's not exactly easy.  To have more than four billion
      references to a page requires a minimum of 32GB of kernel memory just
      for the pointers to the pages, much less any metadata to keep track of
      those pointers.  Jann needed a total of 140GB of memory and a specially
      crafted filesystem that leaves all reads pending (in order to not ever
      free the page references and just keep adding more).
      
      Still, we have a fairly straightforward way to limit the two obvious
      user-controllable sources of page references: direct-IO like page
      references gotten through get_user_pages(), and the splice pipe page
      duplication.  So let's just do that.
      
      * branch page-refs:
        fs: prevent page refcount overflow in pipe_buf_get
        mm: prevent get_user_pages() from overflowing page refcount
        mm: add 'try_get_page()' helper function
        mm: make page ref count overflow check tighter and more explicit
      6b3a7077
    • fs: prevent page refcount overflow in pipe_buf_get · 15fab63e
      Committed by Matthew Wilcox
      Change pipe_buf_get() to return a bool indicating whether it succeeded
      in raising the refcount of the page (if the thing in the pipe is a page).
      This removes another mechanism for overflowing the page refcount.  All
      callers converted to handle a failure.
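
      A toy model of the new calling convention (simplified types and an
      illustrative threshold; not the pipe code itself): the get operation
      reports whether the reference was actually taken, and callers bail out
      instead of assuming success.

        #include <errno.h>
        #include <limits.h>
        #include <stdbool.h>

        struct toy_buf { int refcount; };

        /* Refuse once the count gets anywhere near wrapping. */
        static bool toy_buf_get(struct toy_buf *b)
        {
            if (b->refcount >= INT_MAX / 2)
                return false;
            b->refcount++;
            return true;
        }

        /* Callers must handle the failure. */
        static int toy_use_buf(struct toy_buf *b)
        {
            if (!toy_buf_get(b))
                return -EAGAIN;
            /* ... work with the buffer ... */
            b->refcount--;
            return 0;
        }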
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15fab63e
    • mm: prevent get_user_pages() from overflowing page refcount · 8fde12ca
      Committed by Linus Torvalds
      If the page refcount wraps around past zero, it will be freed while
      there are still four billion references to it.  One of the possible
      avenues for an attacker to try to make this happen is by doing direct IO
      on a page multiple times.  This patch makes get_user_pages() refuse to
      take a new page reference if there are already more than two billion
      references to the page.
      Reported-by: Jann Horn <jannh@google.com>
      Acked-by: Matthew Wilcox <willy@infradead.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8fde12ca
    • mm: add 'try_get_page()' helper function · 88b1a17d
      Committed by Linus Torvalds
      This is the same as the traditional 'get_page()' function, but instead
      of unconditionally incrementing the reference count of the page, it only
      does so if the count was "safe".  It returns whether the reference count
      was incremented (and is marked __must_check, since the caller obviously
      has to be aware of it).
      
      Also like 'get_page()', you can't use this function unless you already
      had a reference to the page.  The intent is that you can use this
      exactly like get_page(), but in situations where you want to limit the
      maximum reference count.
      
      The code currently does an unconditional WARN_ON_ONCE() if we ever hit
      the reference count issues (either zero or negative), as a notification
      that the conditional non-increment actually happened.
      
      NOTE! The count access for the "safety" check is inherently racy, but
      that doesn't matter since the buffer we use is basically half the range
      of the reference count (ie we look at the sign of the count).
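
      A minimal sketch of the described helper (toy types; the real helper
      operates on struct page via the page_ref accessors):

        #include <stdbool.h>

        struct toy_page { int refcount; };

        /* Like get_page(), but only increments if the count is "safe": the
         * sign of the count is the overflow buffer, since roughly half of the
         * int range is kept as headroom. */
        static bool __attribute__((warn_unused_result))
        try_get_toy_page(struct toy_page *p)
        {
            if (p->refcount <= 0)
                return false;   /* the kernel version WARNs once here */
            p->refcount++;
            return true;
        }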
      Acked-by: Matthew Wilcox <willy@infradead.org>
      Cc: Jann Horn <jannh@google.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      88b1a17d
    • mm: make page ref count overflow check tighter and more explicit · f958d7b5
      Committed by Linus Torvalds
      We have a VM_BUG_ON() to check that the page reference count doesn't
      underflow (or get close to overflow) by checking the sign of the count.
      
      That's all fine, but we actually want to allow people to use a "get page
      ref unless it's already very high" helper function, and we want that one
      to use the sign of the page ref (without triggering this VM_BUG_ON).
      
      Change the VM_BUG_ON to only check for small underflows (or _very_ close
      to overflowing), and ignore overflows which have strayed into negative
      territory.
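
      The relaxed check can be expressed as a small predicate; a sketch (the
      threshold is illustrative):

        #include <stdbool.h>

        /* True for a count of zero (page already freed) or for counts whose
         * unsigned value is within 127 of wrapping around, i.e. small negative
         * values when viewed as signed.  Deeply negative counts, which a racing
         * "get unless the count is very high" helper can legitimately observe,
         * are ignored. */
        static bool ref_zero_or_close_to_overflow(int refcount)
        {
            return ((unsigned int)refcount) + 127u <= 127u;
        }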
      Acked-by: Matthew Wilcox <willy@infradead.org>
      Cc: Jann Horn <jannh@google.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f958d7b5
  5. 14 April 2019 (6 commits)
    • Merge tag 'for-linus-20190412' of git://git.kernel.dk/linux-block · 4443f8e6
      Committed by Linus Torvalds
      Pull block fixes from Jens Axboe:
       "Set of fixes that should go into this round. This pull is larger than
        I'd like at this time, but there's really no specific reason for that.
        Some are fixes for issues that went into this merge window, others are
        not. Anyway, this contains:
      
         - Hardware queue limiting for virtio-blk/scsi (Dongli)
      
         - Multi-page bvec fixes for lightnvm pblk
      
         - Multi-bio dio error fix (Jason)
      
         - Remove the cache hint from the io_uring tool side, since we didn't
           move forward with that (me)
      
         - Make io_uring SETUP_SQPOLL root restricted (me)
      
         - Fix leak of page in error handling for pc requests (Jérôme)
      
         - Fix BFQ regression introduced in this merge window (Paolo)
      
         - Fix break logic for bio segment iteration (Ming)
      
         - Fix NVMe cancel request error handling (Ming)
      
         - NVMe pull request with two fixes (Christoph):
             - fix the initial CSN for nvme-fc (James)
             - handle log page offsets properly in the target (Keith)"
      
      * tag 'for-linus-20190412' of git://git.kernel.dk/linux-block:
        block: fix the return errno for direct IO
        nvmet: fix discover log page when offsets are used
        nvme-fc: correct csn initialization and increments on error
        block: do not leak memory in bio_copy_user_iov()
        lightnvm: pblk: fix crash in pblk_end_partial_read due to multipage bvecs
        nvme: cancel request synchronously
        blk-mq: introduce blk_mq_complete_request_sync()
        scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids
        virtio-blk: limit number of hw queues by nr_cpu_ids
        block, bfq: fix use after free in bfq_bfqq_expire
        io_uring: restrict IORING_SETUP_SQPOLL to root
        tools/io_uring: remove IOCQE_FLAG_CACHEHIT
        block: don't use for-inside-for in bio_for_each_segment_all
      4443f8e6
    • Merge tag 'nfs-for-5.1-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs · b60bc066
      Committed by Linus Torvalds
      Pull NFS client bugfixes from Trond Myklebust:
       "Highlights include:
      
        Stable fix:
      
         - Fix a deadlock in close() due to incorrect draining of RDMA queues
      
        Bugfixes:
      
         - Revert "SUNRPC: Micro-optimise when the task is known not to be
           sleeping" as it is causing stack overflows
      
         - Fix a regression where NFSv4 getacl and fs_locations stopped
           working
      
         - Forbid setting AF_INET6 to "struct sockaddr_in"->sin_family.
      
         - Fix xfstests failures due to incorrect copy_file_range() return
           values"
      
      * tag 'nfs-for-5.1-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
        Revert "SUNRPC: Micro-optimise when the task is known not to be sleeping"
        NFSv4.1 fix incorrect return value in copy_file_range
        xprtrdma: Fix helper that drains the transport
        NFS: Fix handling of reply page vector
        NFS: Forbid setting AF_INET6 to "struct sockaddr_in"->sin_family.
      b60bc066
    • Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi · 87af0c38
      Committed by Linus Torvalds
      Pull SCSI fix from James Bottomley:
       "One obvious fix for a csiostor data corruption on error bug"
      
      * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
        scsi: csiostor: fix missing data copy in csio_scsi_err_handler()
      87af0c38
    • Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux · 09bad0df
      Committed by Linus Torvalds
      Pull clk fixes from Stephen Boyd:
       "Here's more than a handful of clk driver fixes for changes that came
        in during the merge window:
      
         - Fix the AT91 sama5d2 programmable clk prescaler formula
      
         - A bunch of Amlogic meson clk driver fixes for the VPU clks
      
         - A DMI quirk for Intel's Bay Trail SoC's driver to properly mark pmc
           clks as critical only when really needed
      
         - Stop overwriting CLK_SET_RATE_PARENT flag in mediatek's clk gate
           implementation
      
         - Use the right structure to test for a frequency table in i.MX's
           PLL_1416x driver"
      
      * tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
        clk: imx: Fix PLL_1416X not rounding rates
        clk: mediatek: fix clk-gate flag setting
        platform/x86: pmc_atom: Drop __initconst on dmi table
        clk: x86: Add system specific quirk to mark clocks as critical
        clk: meson: vid-pll-div: remove warning and return 0 on invalid config
        clk: meson: pll: fix rounding and setting a rate that matches precisely
        clk: meson-g12a: fix VPU clock parents
        clk: meson: g12a: fix VPU clock muxes mask
        clk: meson-gxbb: round the vdec dividers to closest
        clk: at91: fix programmable clock for sama5d2
      09bad0df
    • Merge tag 'pci-v5.1-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci · a3b84248
      Committed by Linus Torvalds
      Pull PCI fixes from Bjorn Helgaas:
      
       - Add a DMA alias quirk for another Marvell SATA device (Andre
         Przywara)
      
       - Fix a pciehp regression that broke safe removal of devices (Sergey
         Miroshnichenko)
      
      * tag 'pci-v5.1-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
        PCI: pciehp: Ignore Link State Changes after powering off a slot
        PCI: Add function 1 DMA alias quirk for Marvell 9170 SATA controller
      a3b84248
    • Merge tag 'powerpc-5.1-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · cf60528f
      Committed by Linus Torvalds
      Pull powerpc fixes from Michael Ellerman:
       "A minor build fix for 64-bit FLATMEM configs.
      
        A fix for a boot failure on 32-bit powermacs.
      
        My commit to fix CLOCK_MONOTONIC across Y2038 broke the 32-bit VDSO on
        64-bit kernels, ie. compat mode, which is only used on big endian.
      
        The rewrite of the SLB code we merged in 4.20 missed the fact that the
        0x380 exception is also used with the Radix MMU to report out of range
        accesses. This could lead to an oops if userspace tried to read from
        addresses outside the user or kernel range.
      
        Thanks to: Aneesh Kumar K.V, Christophe Leroy, Larry Finger, Nicholas
        Piggin"
      
      * tag 'powerpc-5.1-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        powerpc/mm: Define MAX_PHYSMEM_BITS for all 64-bit configs
        powerpc/64s/radix: Fix radix segment exception handling
        powerpc/vdso32: fix CLOCK_MONOTONIC on PPC64
        powerpc/32: Fix early boot failure with RTAS built-in
      cf60528f
  6. 13 April 2019 (4 commits)