1. 29 June 2019 (1 commit)
    • x86/timer: Skip PIT initialization on modern chipsets · c8c40767
      Committed by Thomas Gleixner
      Recent Intel chipsets, including Skylake and ApolloLake, have a special
      ITSSPRC register which allows the 8254 PIT to be gated.  When gated, the
      8254 registers can still be programmed as normal, but there are no IRQ0
      timer interrupts.
      
      Some products such as the Connex L1430 and exone go Rugged E11 use this
      register to ship with the PIT gated by default. This causes Linux to fail
      to boot:
      
        Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with
        apic=debug and send a report.
      
      The panic happens before the framebuffer is initialized, so to the user, it
      appears as an early boot hang on a black screen.
      
      Affected products typically have a BIOS option that can be used to enable
      the 8254 and make Linux work (Chipset -> South Cluster Configuration ->
      Miscellaneous Configuration -> 8254 Clock Gating), however it would be best
      to make Linux support the no-8254 case.
      
      Modern systems make it possible to discover the TSC and local APIC
      timer frequencies, so calibration against the PIT is not required.
      These systems have always-running timers, and the local APIC timer
      also works in deep power states.
      
      So the PIT setup, including the IO-APIC timer interrupt delivery
      checks, is a pointless exercise.
      
      Skip the PIT setup and the IO-APIC timer interrupt checks on these
      systems, which avoids the panic caused by non-ticking PITs and also
      speeds up the boot process.
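      
      For illustration, a minimal sketch of the kind of check involved (the
      helper name and the exact set of conditions are assumptions for this
      sketch, not the literal kernel code; the feature flags themselves are
      real x86 CPU feature bits):
      
        static bool pit_needed_sketch(void)
        {
                /* No local APIC timer at all: the PIT is required. */
                if (!boot_cpu_has(X86_FEATURE_APIC))
                        return true;
      
                /* TSC frequency not enumerable via CPUID/MSRs: the PIT
                 * is still needed for calibration. */
                if (!boot_cpu_has(X86_FEATURE_TSC_KNOWN_FREQ))
                        return true;
      
                /* APIC timer must keep ticking in deep C-states (ARAT). */
                if (!boot_cpu_has(X86_FEATURE_ARAT))
                        return true;
      
                return false;
        }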
      
      Thanks to Daniel for providing the changelog, initial analysis of the
      problem and testing against a variety of machines.
      Reported-by: Daniel Drake <drake@endlessm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: linux@endlessm.com
      Cc: rafael.j.wysocki@intel.com
      Cc: hdegoede@redhat.com
      Link: https://lkml.kernel.org/r/20190628072307.24678-1-drake@endlessm.com
  2. 23 June 2019 (1 commit)
  3. 17 June 2019 (1 commit)
  4. 09 May 2019 (3 commits)
  5. 06 May 2019 (1 commit)
  6. 05 May 2019 (1 commit)
    • perf/x86/intel: Fix race in intel_pmu_disable_event() · 6f55967a
      Committed by Jiri Olsa
      A new race in x86_pmu_stop() was introduced by replacing the
      atomic __test_and_clear_bit() on cpuc->active_mask with separate
      test_bit() and __clear_bit() calls in the following commit:
      
        3966c3fe ("x86/perf/amd: Remove need to check "running" bit in NMI handler")
      
      The race causes panic for PEBS events with enabled callchains:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
        ...
        RIP: 0010:perf_prepare_sample+0x8c/0x530
        Call Trace:
         <NMI>
         perf_event_output_forward+0x2a/0x80
         __perf_event_overflow+0x51/0xe0
         handle_pmi_common+0x19e/0x240
         intel_pmu_handle_irq+0xad/0x170
         perf_event_nmi_handler+0x2e/0x50
         nmi_handle+0x69/0x110
         default_do_nmi+0x3e/0x100
         do_nmi+0x11a/0x180
         end_repeat_nmi+0x16/0x1a
        RIP: 0010:native_write_msr+0x6/0x20
        ...
         </NMI>
         intel_pmu_disable_event+0x98/0xf0
         x86_pmu_stop+0x6e/0xb0
         x86_pmu_del+0x46/0x140
         event_sched_out.isra.97+0x7e/0x160
        ...
      
      The event is configured to take samples from the PEBS drain code,
      but when it is disabled we go through the NMI path instead, where
      data->callchain does not get allocated and we crash:
      
                x86_pmu_stop
                  test_bit(hwc->idx, cpuc->active_mask)
                  intel_pmu_disable_event(event)
                  {
                    ...
                    intel_pmu_pebs_disable(event);
                    ...
      
      EVENT OVERFLOW ->  <NMI>
                           intel_pmu_handle_irq
                             handle_pmi_common
         TEST PASSES ->        test_bit(bit, cpuc->active_mask))
                                 perf_event_overflow
                                   perf_prepare_sample
                                   {
                                     ...
                                     if (!(sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
                                           data->callchain = perf_callchain(event, regs);
      
               CRASH ->              size += data->callchain->nr;
                                   }
                         </NMI>
                    ...
                    x86_pmu_disable_event(event)
                  }
      
                  __clear_bit(hwc->idx, cpuc->active_mask);
      
      Fix this by disabling the event itself before clearing the PEBS
      bit.
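      
      For illustration, a condensed sketch of the resulting ordering
      (simplified from the actual intel_pmu_disable_event(); the
      fixed-counter path and other details are omitted):
      
        static void intel_pmu_disable_event(struct perf_event *event)
        {
                struct hw_perf_event *hwc = &event->hw;
      
                /* Stop the counter first. */
                x86_pmu_disable_event(event);
      
                /*
                 * Must run after x86_pmu_disable_event(), so a racing NMI
                 * cannot observe the event armed without the PEBS bit set.
                 */
                if (unlikely(event->attr.precise_ip))
                        intel_pmu_pebs_disable(event);
        }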
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Arcari <darcari@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Lendacky Thomas <Thomas.Lendacky@amd.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 3966c3fe ("x86/perf/amd: Remove need to check "running" bit in NMI handler")
      Link: http://lkml.kernel.org/r/20190504151556.31031-1-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 03 May 2019 (2 commits)
    • perf/x86/intel/pt: Remove software double buffering PMU capability · 72e830f6
      Committed by Alexander Shishkin
      Now that all AUX allocations are high-order by default, the software
      double buffering PMU capability no longer makes sense; get rid of
      it. If some PMUs choose to opt out, we can re-introduce it.
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: adrian.hunter@intel.com
      Link: http://lkml.kernel.org/r/20190503085536.24119-3-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/amd: Update generic hardware cache events for Family 17h · 0e3b74e2
      Committed by Kim Phillips
      Add a new amd_hw_cache_event_ids_f17h assignment structure set
      for AMD families 17h and above, since a lot has changed.  Specifically:
      
      L1 Data Cache
      
      The data cache access counter remains the same on Family 17h.
      
      For DC misses, PMCx041's definition changes with Family 17h,
      so instead we use the L2 cache accesses from L1 data cache
      misses counter (PMCx060,umask=0xc8).
      
      For DC hardware prefetch events, Family 17h breaks compatibility
      for PMCx067 "Data Prefetcher", so instead, we use PMCx05a "Hardware
      Prefetch DC Fills."
      
      L1 Instruction Cache
      
      PMCs 0x80 and 0x81 (32-byte IC fetches and misses) are backward
      compatible on Family 17h.
      
      For prefetches, we remove the erroneous PMCx04B assignment which
      counts how many software data cache prefetch load instructions were
      dispatched.
      
      LL - Last Level Cache
      
      Removing PMCs 7D, 7E, and 7F assignments, as they do not exist
      on Family 17h, where the last level cache is L3.  L3 counters
      can be accessed using the existing AMD Uncore driver.
      
      Data TLB
      
      On Intel machines, data TLB accesses ("dTLB-loads") are assigned
      to counters that count load/store instructions retired.  This
      is inconsistent with instruction TLB accesses, where Intel
      implementations report iTLB misses that hit in the STLB.
      
      Ideally, dTLB-loads would count higher-level dTLB misses that hit
      in lower-level TLBs, and dTLB-load-misses would report those that
      also missed in those lower-level TLBs, therefore causing a page
      table walk.  That would be consistent with instruction TLB
      operation, would remove the redundancy between dTLB-loads and
      L1-dcache-loads, and would prevent perf from producing artificially
      low percentage ratios, i.e. the "0.01%" below:
      
              42,550,869      L1-dcache-loads
              41,591,860      dTLB-loads
                   4,802      dTLB-load-misses          #    0.01% of all dTLB cache hits
               7,283,682      L1-dcache-stores
               7,912,392      dTLB-stores
                     310      dTLB-store-misses
      
      On AMD Families prior to 17h, the "Data Cache Accesses" counter is
      used, which is slightly better than load/store instructions retired,
      but still counts in terms of individual load/store operations
      instead of TLB operations.
      
      So, for AMD Families 17h and higher, this patch assigns "dTLB-loads"
      to a counter for L1 dTLB misses that hit in the L2 dTLB, and
      "dTLB-load-misses" to a counter for L1 DTLB misses that caused
      L2 DTLB misses and therefore also caused page table walks.  This
      results in a much more accurate view of data TLB performance:
      
              60,961,781      L1-dcache-loads
                   4,601      dTLB-loads
                     963      dTLB-load-misses          #   20.93% of all dTLB cache hits
      
      Note that for all AMD families, data loads and stores are combined
      in a single accesses counter, so no 'L1-dcache-stores' are reported
      separately, and stores are counted with loads in 'L1-dcache-loads'.
      
      Also note that the "% of all dTLB cache hits" string is misleading
      because (a) although TLBs can be considered caches for page tables,
      the phrase "dTLB cache" can be misread as referring to data cache
      hits, since the figures are similar (at least on Intel), and (b) not
      all those loads (technically accesses) actually "hit" at that
      hardware level.  "% of all dTLB accesses" would be clearer and more
      accurate.
      
      Instruction TLB
      
      On Intel machines, 'iTLB-loads' measure iTLB misses that hit in the
      STLB, and 'iTLB-load-misses' measure iTLB misses that also missed in
      the STLB and completed a page table walk.
      
      For AMD Family 17h and above, for 'iTLB-loads' we replace the
      erroneous instruction cache fetches counter with PMCx084
      "L1 ITLB Miss, L2 ITLB Hit".
      
      For 'iTLB-load-misses' we still use PMCx085 "L1 ITLB Miss,
      L2 ITLB Miss", but set a 0xff umask because without it the event
      does not get counted.
      
      Branch Predictor (BPU)
      
      PMCs 0xc2 and 0xc3 continue to be valid across all AMD Families.
      
      Node Level Events
      
      Family 17h does not have a PMCx0e9 counter, and corresponding counters
      have not been made available publicly, so for now, we mark them as
      unsupported for Families 17h and above.
      
      Reference:
      
        "Open-Source Register Reference For AMD Family 17h Processors Models 00h-2Fh"
        Released 7/17/2018, Publication #56255, Revision 3.03:
        https://www.amd.com/system/files/TechDocs/56255_OSRR.pdf
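      
      For illustration, a hedged excerpt showing the shape of the new table,
      using only the event/umask values quoted above (encodings are
      event | umask << 8; C() is the usual PERF_COUNT_HW_CACHE_##x shorthand;
      entries not covered by this changelog are omitted):
      
        static __initconst const u64 amd_hw_cache_event_ids_f17h
                                [PERF_COUNT_HW_CACHE_MAX]
                                [PERF_COUNT_HW_CACHE_OP_MAX]
                                [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
         [C(L1D)] = {
                [C(OP_READ)] = {
                        [C(RESULT_ACCESS)] = 0x0040, /* Data Cache Accesses */
                        [C(RESULT_MISS)]   = 0xc860, /* PMCx060, umask 0xc8 */
                },
         },
         [C(L1I)] = {
                [C(OP_READ)] = {
                        [C(RESULT_ACCESS)] = 0x0080, /* 32-byte IC fetches */
                        [C(RESULT_MISS)]   = 0x0081, /* IC misses */
                },
         },
         [C(ITLB)] = {
                [C(OP_READ)] = {
                        [C(RESULT_ACCESS)] = 0x0084, /* L1 ITLB miss, L2 hit */
                        [C(RESULT_MISS)]   = 0xff85, /* PMCx085, umask 0xff */
                },
         },
        };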
      
      [ mingo: tidied up the line breaks. ]
      Signed-off-by: Kim Phillips <kim.phillips@amd.com>
      Cc: <stable@vger.kernel.org> # v4.9+
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Liška <mliska@suse.cz>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pu Wen <puwen@hygon.cn>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-perf-users@vger.kernel.org
      Fixes: e40ed154 ("perf/x86: Add perf support for AMD family-17h processors")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 02 May 2019 (1 commit)
    • gcc-9: properly declare the {pv,hv}clock_page storage · 459e3a21
      Committed by Linus Torvalds
      The pvclock_page and hvclock_page variables are (as the names imply)
      addresses of pages, created by the linker script.
      
      But we declared them as just "extern u8" variables, which _works_, but
      now that gcc does some more bounds checking, it causes warnings like
      
          warning: array subscript 1 is outside array bounds of ‘u8[1]’
      
      when we then access more than one byte from those variables.
      
      Fix this by simply making the declaration of the variables match
      reality, which makes the compiler happy too.
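      
      The change itself is essentially a declaration fix; a sketch of the
      before/after (PAGE_SIZE comes from the kernel's own headers):
      
        /* Before: legal, but gcc 9's bounds checking sees a 1-byte object. */
        extern u8 pvclock_page;
      
        /* After: declare what the linker script actually provides, a page. */
        extern u8 pvclock_page[PAGE_SIZE]
                __attribute__((visibility("hidden")));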
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 01 May 2019 (4 commits)
  10. 30 April 2019 (20 commits)
  11. 29 April 2019 (2 commits)
    • x86/stacktrace: Use common infrastructure · 3599fe12
      Committed by Thomas Gleixner
      Replace the stack_trace_save*() functions with the new arch_stack_walk()
      interfaces.
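      
      For context, a sketch of the common interface x86 now implements (the
      consume-callback typedef is summarized in the comment; see the generic
      stacktrace code for its exact form at any given kernel version):
      
        /*
         * Walk the stack of @task (or of @regs when non-NULL), feeding each
         * recovered return address to @consume_entry until it asks to stop
         * or the unwinder finishes.
         */
        void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                             struct task_struct *task, struct pt_regs *regs);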
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: linux-arch@vger.kernel.org
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: linux-mm@kvack.org
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Link: https://lkml.kernel.org/r/20190425094803.816485461@linutronix.de
    • perf/x86: Make perf callchains work without CONFIG_FRAME_POINTER · d15d3568
      Committed by Kairui Song
      Currently, perf callchains don't work well with the ORC unwinder
      when sampling from a trace point; we get a useless in-kernel
      callchain like this:
      
      perf  6429 [000]    22.498450:             kmem:mm_page_alloc: page=0x176a17 pfn=1534487 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffbe23e32e __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7efdf7f7d3e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	5651468729c1 [unknown] (/usr/bin/perf)
      	5651467ee82a main+0x69a (/usr/bin/perf)
      	7efdf7eaf413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      
      The root cause is that trace point events don't provide a real
      snapshot of the hardware registers. Instead, perf tries to fetch
      the required caller registers and compose a fake register snapshot
      that is supposed to contain enough information to start unwinding.
      However, without CONFIG_FRAME_POINTER, fetching the caller's BP as
      the frame pointer fails, so the current frame pointer is returned
      instead. The result is an invalid register combination that
      confuses the unwinder and ends the stacktrace early.
      
      So in this case, don't try to dump BP at all; let the unwinder
      start directly when the register set is not a real snapshot. Use SP
      as the skip mark, so the unwinder skips all frames until it meets
      the frame of the trace point caller.
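      
      A condensed sketch of the resulting logic (close to, but not literally,
      the patched perf_callchain_kernel(); perf_hw_regs() tests a flags bit,
      X86_EFLAGS_FIXED, that is always set in a real register snapshot and
      zero in a composed one):
      
        void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
                                   struct pt_regs *regs)
        {
                struct unwind_state state;
                unsigned long addr;
      
                if (perf_hw_regs(regs)) {
                        /* Real snapshot: record IP, unwind from the regs. */
                        if (perf_callchain_store(entry, regs->ip))
                                return;
                        unwind_start(&state, current, regs, NULL);
                } else {
                        /* Fake snapshot: SP is the first_frame skip mark. */
                        unwind_start(&state, current, NULL,
                                     (unsigned long *)regs->sp);
                }
      
                for (; !unwind_done(&state); unwind_next_frame(&state)) {
                        addr = unwind_get_return_address(&state);
                        if (!addr || perf_callchain_store(entry, addr))
                                return;
                }
        }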
      
      Tested with both the frame pointer unwinder and the ORC unwinder;
      this makes perf callchains recover the full kernel-space stacktrace
      again, like this:
      
      perf  6503 [000]  1567.570191:             kmem:mm_page_alloc: page=0x16c904 pfn=1493252 order=0 migratetype=0 gfp_flags=GFP_KERNEL
          ffffffffb523e2ae __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52383bd __get_free_pages+0xd (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fd28a __pollwait+0x8a (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb521426f perf_poll+0x2f (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52fe3e2 do_sys_poll+0x252 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb52ff027 __x64_sys_poll+0x37 (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb500418b do_syscall_64+0x5b (/lib/modules/5.1.0-rc3+/build/vmlinux)
          ffffffffb5a0008c entry_SYSCALL_64_after_hwframe+0x44 (/lib/modules/5.1.0-rc3+/build/vmlinux)
      	7f71e92d03e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
      	55a22960d9c1 [unknown] (/usr/bin/perf)
      	55a22958982a main+0x69a (/usr/bin/perf)
      	7f71e9202413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
          5541f689495641d7 [unknown] ([unknown])
      Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Kairui Song <kasong@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190422162652.15483-1-kasong@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  12. 27 April 2019 (1 commit)
    • KVM: VMX: Move RSB stuffing to before the first RET after VM-Exit · f2fde6a5
      Committed by Rick Edgecombe
      The not-so-recent change to move VMX's VM-Exit handling to a dedicated
      "function" unintentionally exposed KVM to a speculative attack from the
      guest by executing a RET prior to stuffing the RSB.  Make RSB stuffing
      happen immediately after VM-Exit, before any unpaired returns.
      
      Alternatively, the VM-Exit path could postpone full RSB stuffing until
      its current location by stuffing the RSB only as needed, or by avoiding
      returns in the VM-Exit path entirely, but both alternatives are beyond
      ugly since vmx_vmexit() has multiple indirect callers (by way of
      vmx_vmenter()).  And putting the RSB stuffing immediately after VM-Exit
      makes it much less likely to be re-broken in the future.
      
      Note, the cost of PUSH/POP could be avoided in the normal flow by
      pairing the PUSH RAX with the POP RAX in __vmx_vcpu_run() and adding
      a POP to nested_vmx_check_vmentry_hw(), but such a weird/subtle
      dependency is likely to cause problems in the long run, and PUSH/POP
      will take all of a few cycles, which is peanuts compared to the number
      of cycles required to fill the RSB.
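      
      A hedged sketch of the resulting exit path (condensed; the real code
      wraps this in CONFIG_RETPOLINE guards and an alternatives-based jump
      over the stuffing when retpolines are disabled):
      
        ENTRY(vmx_vmexit)
                /* RAX holds a guest value; preserve it, since
                 * FILL_RETURN_BUFFER needs a scratch register. */
                push %_ASM_AX
      
                /* IMPORTANT: the RSB must be stuffed before the first RET. */
                FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
      
                pop %_ASM_AX
                ret
        ENDPROC(vmx_vmexit)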
      
      Fixes: 453eafbe ("KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines")
      Reported-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  13. 26 April 2019 (2 commits)
    • x86/mm/tlb: Remove 'struct flush_tlb_info' from the stack · 3db6d5a5
      Committed by Nadav Amit
      Move the flush_tlb_info variables off the stack. This allows
      flush_tlb_info to be cache-line aligned, avoiding potentially
      unnecessary cache line movement. It also gives the variables a fixed
      virtual-to-physical translation, which reduces TLB misses.
      
      Use a per-CPU struct for flush_tlb_mm_range() and
      flush_tlb_kernel_range(). Add debug assertions to ensure there are
      no nested TLB flushes that might overwrite the per-CPU data. For
      arch_tlbbatch_flush(), use a const struct.
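      
      A sketch of the scheme (names follow the changelog; the real helper
      takes more parameters and fills in more fields):
      
        static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);
      
        #ifdef CONFIG_DEBUG_VM
        static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
        #endif
      
        static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
                                unsigned long start, unsigned long end)
        {
                struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);
      
        #ifdef CONFIG_DEBUG_VM
                /* A nested flush would silently overwrite the per-CPU data. */
                BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
        #endif
                info->mm    = mm;
                info->start = start;
                info->end   = end;
                return info;
        }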
      
      Results when running a microbenchmark that performs 10^6
      MADV_DONTNEED operations and touches a page, while 3 additional
      threads run a busy-wait loop (5 runs, PTI and retpolines turned off):
      
      			base		off-stack
      			----		---------
        avg (usec/op)		1.629		1.570	(-3%)
        stddev		0.014		0.009
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20190425230143.7008-1-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: tsc: Rework time_cpufreq_notifier() · c208ac8f
      Committed by Rafael J. Wysocki
      There are problems with running time_cpufreq_notifier() on SMP
      systems.
      
      First off, the rdtsc() called from there runs on the CPU executing
      that code and not necessarily on the CPU whose sched_clock() rate is
      updated, which is questionable at best.
      
      Second, in the cases when the frequencies of all CPUs in an SMP
      system are always in sync, it is not sufficient to update just
      one of them or the set associated with a given cpufreq policy on
      frequency changes - all CPUs in the system should be updated and
      that would require more than a simple transition notifier.
      
      Note, however, that the underlying issue (the TSC rate depending on
      the CPU frequency) has not been present in hardware shipping for the
      last few years and in quite a few relevant cases (acpi-cpufreq in
      particular) running time_cpufreq_notifier() will cause the TSC to
      be marked as unstable anyway.
      
      For this reason, make time_cpufreq_notifier() simply mark the TSC as
      unstable and give up when run on SMP, and only attempt any
      adjustments otherwise.
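      
      A condensed sketch of the new behavior (simplified; the UP branch
      keeps the pre-existing loops_per_jiffy/TSC-rate adjustment logic):
      
        static int time_cpufreq_notifier(struct notifier_block *nb,
                                         unsigned long val, void *data)
        {
                if (num_online_cpus() > 1) {
                        /* No way to sensibly adjust per-CPU rates here. */
                        mark_tsc_unstable("cpufreq changes on SMP");
                        return 0;
                }
      
                /* UP only: scale the TSC-based sched_clock() rate for the
                 * new frequency, as before. */
                return 0;
        }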
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>