1. 04 Nov 2016 (17 commits)
  2. 03 Nov 2016 (5 commits)
  3. 30 Oct 2016 (1 commit)
  4. 29 Oct 2016 (10 commits)
    • x86/smpboot: Init apic mapping before usage · 1e90a13d
      Thomas Gleixner committed
      The recent changes, which forced the registration of the boot CPU on UP
      systems that do not have ACPI tables, have been fixed for systems w/o a
      local APIC, but left a wreckage for systems which have neither ACPI nor
      mptables, yet whose CPU does have an APIC, e.g. VirtualBox.
      
      The boot process crashes in prefill_possible_map() as it wants to register
      the boot cpu, which needs to access the local apic, but the local APIC is
      not yet mapped.
      
      There is no reason why init_apic_mapping() can't be invoked before
      prefill_possible_map(). So instead of playing another silly early mapping
      game, as the ACPI/mptables code does, we just move init_apic_mapping()
      before the call to prefill_possible_map().
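
      A minimal sketch of the reordering described above, assuming the usual
      init_apic_mappings() entry point (the surrounding setup_arch() code is
      elided, so this only shows the call-order change):

          void __init setup_arch(char **cmdline_p)
          {
                  /* ... earlier boot setup ... */

                  /*
                   * Map the local APIC first, so that registering the boot
                   * CPU in prefill_possible_map() can safely access it.
                   */
                  init_apic_mappings();

                  prefill_possible_map();

                  /* ... rest of setup_arch() ... */
          }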
      
      In hindsight, I should have noticed that combination earlier.
      
      Sorry for the churn (also in stable)!
      
      Fixes: ff856051 ("x86/boot/smp: Don't try to poke disabled/non-existent APIC")
      Reported-and-debugged-by: Michal Necasek <michal.necasek@oracle.com>
      Reported-and-tested-by: Wolfgang Bauer <wbauer@tmo.at>
      Cc: prarit@redhat.com
      Cc: ville.syrjala@linux.intel.com
      Cc: michael.thayer@oracle.com
      Cc: knut.osmundsen@oracle.com
      Cc: frank.mehnert@oracle.com
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610282114380.5053@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1e90a13d
    • ARC: module: print pretty section names · b75dcd9c
      Vineet Gupta committed
      Now that we have a reference to the section name string table in
      apply_relocate_add(), use it to
      
       - print the name of the section being relocated
       - print symbols with a NULL name (since they refer to a section)
      
      before
      
      | Section to fixup 7000a060
      | =========================================================
      | rela->r_off | rela->addend | sym->st_value | ADDR | VALUE
      | =========================================================
      |	1c		0		7000e000  7000a07c 7000e000 []
      |	40		0		7000a000  7000a0a0 7000a000 []
      
      after
      
      | Section to fixup .eh_frame @7000a060
      | =========================================================
      | r_off	r_add	st_value ADDRESS  VALUE
      | =========================================================
      |    1c	0	7000e000 7000a07c 7000e000 [.init.text]
      |    40	0	7000a000 7000a0a0 7000a000 [.exit.text]
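
      A hedged sketch of the lookup this enables inside apply_relocate_add():
      the section's printable name is just an offset into the section-header
      string table.  The mod->arch.secstr field is the reference saved by the
      ARC module code (see the next entry); treat the helper name here as
      illustrative.

          static const char *reloc_sec_name(struct module *mod,
                                            Elf32_Shdr *sechdrs,
                                            unsigned int secidx)
          {
                  /* secstr was stashed in module_frob_arch_sections() */
                  return mod->arch.secstr + sechdrs[secidx].sh_name;
          }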
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      b75dcd9c
    • ARC: module: elide loop to save reference to .eh_frame · d65283f7
      Vineet Gupta committed
      The loop was really needed in the .debug_frame regime, where we wanted to
      mark it SH_ALLOC so that apply_relocate_add() would process it. That's not
      needed for .eh_frame, so we check this in apply_relocate_add(), which
      gets called for each section.
      
      Note that we need to save a reference to the "section name strings"
      section in module_frob_arch_sections(), since apply_relocate_add()
      doesn't get passed that.
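
      Roughly, the saved reference looks like this (a sketch: the secstr field
      in struct mod_arch_specific and the 32-bit ELF types follow the ARC port,
      but details may differ):

          int module_frob_arch_sections(Elf32_Ehdr *hdr, Elf32_Shdr *sechdrs,
                                        char *secstr, struct module *mod)
          {
                  /* keep the section-name string table around so that
                   * apply_relocate_add() can print pretty section names
                   */
                  mod->arch.secstr = secstr;
                  return 0;
          }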
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      d65283f7
    • ARC: mm: retire ARC_DBG_TLB_MISS_COUNT... · f644e368
      Vineet Gupta committed
      ... given that we have perf counters able to do the same thing
      non-intrusively.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      f644e368
    • ARC: build: retire old toggles · c3005475
      Vineet Gupta committed
      These are really ancient toggles, and the tools no longer require them to
      be passed. This paves the way for deprecating them in the long run.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      c3005475
    • ARC: boot log: refactor cpu name/release printing · d975cbc8
      Vineet Gupta committed
      The motivation is to identify ARC750 vs. ARC770 (we currently print
      generic "ARC700").
      
      A given ARC700 release could be a 750 or a 770 with the same ARCNUM (or
      family identifier, which is unfortunate). The existing arc_cpu_tbl[] kept
      a single concatenated string for core name and release, which thus doesn't
      work for 750 vs. 770 identification.
      
      So split this into 2 tables, one with core names and the other with
      releases. And while we are at it, get rid of the range checking for family
      numbers: we just document the cores known to run Linux and ditch the
      others.
      
      With this in place, we add detection of ARC750, which is (see the sketch
      below):
       - cores 0x33 and before
       - cores 0x34 and later with MMUv2
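
      The rule above, expressed as a small helper (purely illustrative: the
      function and parameter names are assumptions, with arcver being the
      family/release id and mmu_ver the MMU version read from the BCRs):

          static bool is_arc750(unsigned int arcver, unsigned int mmu_ver)
          {
                  return arcver <= 0x33 || (arcver >= 0x34 && mmu_ver == 2);
          }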
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      d975cbc8
    • d7c46114
    • ARC: boot log: don't assume SWAPE instruction support · a024fd9b
      Vineet Gupta committed
      This came to light when helping a customer with an oldish ARC750 core who
      was getting instruction errors because of the lack of SWAPE, while the
      boot log was incorrectly printing it as being present.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a024fd9b
    • ARC: boot log: refactor printing abt features not captured in BCRs · 73e284d2
      Vineet Gupta committed
      On older arc700 cores, some of the configured features were not reported
      in the Build Config Registers. To print them at boot, we just use the
      Kconfig option, i.e. whether Linux is built to use them or not.
      Yes, this seems bogus, but there is not much else that can be done.
      Moreover, if Linux boots with these enabled, then the Kconfig info is a
      good indicator anyway.
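
      As a sketch of this Kconfig-derived reporting (the two options shown,
      CONFIG_ARC_HAS_SWAPE and CONFIG_ARC_HAS_LLSC, are just examples of such
      features, and the buffer handling is simplified):

          static void report_kconfig_features(char *buf, size_t len)
          {
                  /* no BCR to read, so report what the kernel was built for */
                  scnprintf(buf, len, "swape: %s, llock/scond: %s\n",
                            IS_ENABLED(CONFIG_ARC_HAS_SWAPE) ? "yes" : "no",
                            IS_ENABLED(CONFIG_ARC_HAS_LLSC) ? "yes" : "no");
          }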
      
      Over time these "hacks" accumulated in read_arc_build_cfg_regs() as well
      as arc_cpu_mumbojumbo(), so refactor and move all of them into a single
      place: read_arc_build_cfg_regs(). This causes some code reduction too:
      
      | bloat-o-meter2 arch/arc/kernel/setup.o.0 arch/arc/kernel/setup.o.1
      | add/remove: 0/0 grow/shrink: 2/1 up/down: 64/-132 (-68)
      | function                                     old     new   delta
      | setup_processor                              610     670     +60
      | cpuinfo_arc700                                76      80      +4
      | arc_cpu_mumbojumbo                           752     620    -132
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      73e284d2
    • ARCv2: boot log: print IOC exists as well as enabled status · 711c1f26
      Vineet Gupta committed
      Previously we would not print the case when IOC existed but was not
      enabled.
      
      And while at it, shave one line off the boot printing by consolidating
      the Peripheral address space and IO-Coherency prints, since the latter in
      a way applies to the former.
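
      A sketch of the resulting status print (helper and variable names here
      are illustrative, not the actual ARC code):

          static void print_ioc_status(char *buf, size_t len,
                                       bool ioc_exists, bool ioc_enabled)
          {
                  if (!ioc_exists)
                          return;
                  scnprintf(buf, len, "IOC %s\n",
                            ioc_enabled ? "enabled" : "present but disabled");
          }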
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      711c1f26
  5. 28 Oct 2016 (7 commits)
    • perf/x86/intel: Honour the CPUID for number of fixed counters in hypervisors · f92b7604
      Imre Palik committed
      perf doesn't seem to honour the number of fixed counters specified by CPUID
      leaf 0xa. It always assumes that Intel CPUs have at least 3 fixed counters.
      
      So if some of the fixed counters are masked out by the hypervisor, it still
      tries to check/set them.
      
      This patch makes perf behave nicer when the kernel is running under a
      hypervisor that doesn't expose all the counters.
      
      This patch contains some ideas from Matt Wilson.
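
      For reference, a small userspace sketch of reading the fixed-counter
      count from the same CPUID leaf 0xa (architectural PMU version in
      EAX[7:0], number of fixed counters in EDX[4:0] when the version is > 1):

          #include <cpuid.h>
          #include <stdio.h>

          int main(void)
          {
                  unsigned int eax, ebx, ecx, edx;

                  if (!__get_cpuid(0xa, &eax, &ebx, &ecx, &edx))
                          return 1;

                  unsigned int version   = eax & 0xff;
                  unsigned int num_fixed = (version > 1) ? (edx & 0x1f) : 0;

                  printf("PMU version %u, fixed counters: %u\n",
                         version, num_fixed);
                  return 0;
          }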
      Signed-off-by: Imre Palik <imrep@amazon.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Alexander Kozyrev <alexander.kozyrev@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Artyom Kuanbekov <artyom.kuanbekov@intel.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Wilson <msw@amazon.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1477037939-15605-1-git-send-email-imrep.amz@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f92b7604
    • perf/powerpc: Don't call perf_event_disable() from atomic context · 5aab90ce
      Jiri Olsa committed
      The trinity syscall fuzzer triggered the following WARN() on powerpc:
      
        WARNING: CPU: 9 PID: 2998 at arch/powerpc/kernel/hw_breakpoint.c:278
        ...
        NIP [c00000000093aedc] .hw_breakpoint_handler+0x28c/0x2b0
        LR [c00000000093aed8] .hw_breakpoint_handler+0x288/0x2b0
        Call Trace:
        [c0000002f7933580] [c00000000093aed8] .hw_breakpoint_handler+0x288/0x2b0 (unreliable)
        [c0000002f7933630] [c0000000000f671c] .notifier_call_chain+0x7c/0xf0
        [c0000002f79336d0] [c0000000000f6abc] .__atomic_notifier_call_chain+0xbc/0x1c0
        [c0000002f7933780] [c0000000000f6c40] .notify_die+0x70/0xd0
        [c0000002f7933820] [c00000000001a74c] .do_break+0x4c/0x100
        [c0000002f7933920] [c0000000000089fc] handle_dabr_fault+0x14/0x48
      
      Followed by a lockdep warning:
      
        ===============================
        [ INFO: suspicious RCU usage. ]
        4.8.0-rc5+ #7 Tainted: G        W
        -------------------------------
        ./include/linux/rcupdate.h:556 Illegal context switch in RCU read-side critical section!
      
        other info that might help us debug this:
      
        rcu_scheduler_active = 1, debug_locks = 0
        2 locks held by ls/2998:
         #0:  (rcu_read_lock){......}, at: [<c0000000000f6a00>] .__atomic_notifier_call_chain+0x0/0x1c0
         #1:  (rcu_read_lock){......}, at: [<c00000000093ac50>] .hw_breakpoint_handler+0x0/0x2b0
      
        stack backtrace:
        CPU: 9 PID: 2998 Comm: ls Tainted: G        W       4.8.0-rc5+ #7
        Call Trace:
        [c0000002f7933150] [c00000000094b1f8] .dump_stack+0xe0/0x14c (unreliable)
        [c0000002f79331e0] [c00000000013c468] .lockdep_rcu_suspicious+0x138/0x180
        [c0000002f7933270] [c0000000001005d8] .___might_sleep+0x278/0x2e0
        [c0000002f7933300] [c000000000935584] .mutex_lock_nested+0x64/0x5a0
        [c0000002f7933410] [c00000000023084c] .perf_event_ctx_lock_nested+0x16c/0x380
        [c0000002f7933500] [c000000000230a80] .perf_event_disable+0x20/0x60
        [c0000002f7933580] [c00000000093aeec] .hw_breakpoint_handler+0x29c/0x2b0
        [c0000002f7933630] [c0000000000f671c] .notifier_call_chain+0x7c/0xf0
        [c0000002f79336d0] [c0000000000f6abc] .__atomic_notifier_call_chain+0xbc/0x1c0
        [c0000002f7933780] [c0000000000f6c40] .notify_die+0x70/0xd0
        [c0000002f7933820] [c00000000001a74c] .do_break+0x4c/0x100
        [c0000002f7933920] [c0000000000089fc] handle_dabr_fault+0x14/0x48
      
      While it looks like the first WARN() is probably valid, the other one is
      triggered by disabling the event via perf_event_disable() from atomic
      context.
      
      The event is disabled here in case we were not able to emulate
      the instruction that hit the breakpoint. By disabling the event
      we unschedule the event and make sure it's not scheduled back.
      
      But we can't call perf_event_disable() from atomic context; instead we
      need to use the event's pending_disable irq_work method to disable it.
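
      The deferred-disable path is roughly the following (a sketch of the
      mechanism, not necessarily the literal helper added by the patch):

          void perf_event_disable_inatomic(struct perf_event *event)
          {
                  event->pending_disable = 1;
                  /* runs later from hard-irq-safe irq_work context */
                  irq_work_queue(&event->pending);
          }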
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20161026094824.GA21397@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5aab90ce
    • x86/microcode/AMD: Fix more fallout from CONFIG_RANDOMIZE_MEMORY=y · 1c27f646
      Borislav Petkov committed
      We needed the physical address of the container in order to compute the
      offset within the relocated ramdisk. And we did this by doing __pa() on
      the virtual address.
      
      However, __pa() checks whether the physical address is within PAGE_OFFSET
      and __START_KERNEL_map - see __phys_addr() - and those checks fail if we
      have CONFIG_RANDOMIZE_MEMORY enabled: we feed a virtual address which
      *doesn't* have the randomization offset into a function which uses
      PAGE_OFFSET which *does* have that offset.
      
      This makes this check fire:
      
      	VIRTUAL_BUG_ON((x > y) || !phys_addr_valid(x));
      			^^^^^^
      
      due to the randomization offset.
      
      The fix is as simple as using __pa_nodebug() because we do that
      randomization offset accounting later in that function ourselves.
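
      In sketch form (variable names are illustrative, not the exact ones in
      arch/x86/kernel/cpu/microcode/amd.c):

          static unsigned long container_phys(void *container)
          {
                  /*
                   * __pa() would trip VIRTUAL_BUG_ON() under
                   * CONFIG_RANDOMIZE_MEMORY, because this early virtual
                   * address lacks the KASLR offset; the nodebug variant
                   * skips that sanity check.
                   */
                  return __pa_nodebug(container);
          }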
      Reported-by: Bob Peterson <rpeterso@redhat.com>
      Tested-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: stable@vger.kernel.org # 4.9
      Link: http://lkml.kernel.org/r/20161027123623.j2jri5bandimboff@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1c27f646
    • cris/arch-v32: cryptocop: print a hex number after a 0x prefix · 17a88939
      Uwe Kleine-König committed
      It makes the result hard to interpret correctly if a base 10 number is
      prefixed by 0x.  So change to a hex number.
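
      A standalone illustration of the class of bug being fixed (the value and
      message are made up, not taken from the cryptocop code):

          #include <stdio.h>

          int main(void)
          {
                  unsigned int addr = 2954;

                  /* misleading: decimal digits after a 0x prefix */
                  printf("descriptor at 0x%u\n", addr);
                  /* fixed: the conversion matches the prefix */
                  printf("descriptor at 0x%x\n", addr);
                  return 0;
          }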
      
      Link: http://lkml.kernel.org/r/20161026125658.25728-6-u.kleine-koenig@pengutronix.de
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      17a88939
    • kconfig.h: remove config_enabled() macro · c0a0aba8
      Masahiro Yamada committed
      The use of config_enabled() is ambiguous.  For config options,
      IS_ENABLED(), IS_REACHABLE(), etc.  will make intention clearer.
      Sometimes config_enabled() has been used for non-config options because
      it is useful to check whether the given symbol is defined or not.
      
      I have been working on deprecating config_enabled(), and now is the
      time to finish this work.
      
      Some new users have appeared for v4.9-rc1, but it is trivial to replace
      them:
      
       - arch/x86/mm/kaslr.c
        replace config_enabled() with IS_ENABLED() because
        CONFIG_X86_ESPFIX64 and CONFIG_EFI are boolean.
      
       - include/asm-generic/export.h
        replace config_enabled() with __is_defined().
      
      Then, config_enabled() can be removed now.
      
      Going forward, please use IS_ENABLED(), IS_REACHABLE(), etc. for config
      options, and __is_defined() for non-config symbols.
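
      A quick illustration of the recommended replacements (CONFIG_FOO and
      MY_FLAG are placeholder names, not real symbols; both helpers come from
      <linux/kconfig.h>):

          #include <linux/kconfig.h>

          #define MY_FLAG 1

          static int example(void)
          {
                  int n = 0;

                  if (IS_ENABLED(CONFIG_FOO))     /* Kconfig bool/tristate */
                          n += 1;

                  if (__is_defined(MY_FLAG))      /* plain symbol defined to 1 */
                          n += 2;

                  return n;
          }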
      
      Link: http://lkml.kernel.org/r/1476616078-32252-1-git-send-email-yamada.masahiro@socionext.com
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michal Marek <mmarek@suse.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Paul Bolle <pebolle@tiscali.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c0a0aba8
    • h8300: fix syscall restarting · 21753583
      Mark Rutland committed
      Back in commit f56141e3 ("all arches, signal: move restart_block to
      struct task_struct"), all architectures and core code were changed to
      use task_struct::restart_block.  However, when h8300 support was
      subsequently restored in v4.2, it was not updated to account for this,
      and maintains thread_info::restart_block, which is not kept in sync.
      
      This patch drops the redundant restart_block from thread_info, and moves
      h8300 to the common one in task_struct, ensuring that syscall restarting
      always works as expected.
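
      A sketch of the pattern this fixes (not the literal h8300 diff): signal
      code must update the restart state in the field the core kernel actually
      consults, i.e. task_struct::restart_block, rather than a stale private
      copy in thread_info.

          static void cancel_syscall_restart(void)
          {
                  /* old, broken: current_thread_info()->restart_block.fn = ...;
                   * (a per-arch copy that was never kept in sync)
                   */
                  current->restart_block.fn = do_no_restart_syscall;
          }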
      
      Fixes: f56141e3 ("all arches, signal: move restart_block to struct task_struct")
      Link: http://lkml.kernel.org/r/1476714934-11635-1-git-send-email-mark.rutland@arm.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Cc: <stable@vger.kernel.org>	[4.2+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      21753583
    • sparc64: Handle extremely large kernel TLB range flushes more gracefully. · a74ad5e6
      David S. Miller committed
      When the vmalloc area gets fragmented, and because the firmware
      mapping area sits between where modules live and the vmalloc area, we
      can sometimes receive requests for enormous kernel TLB range flushes.
      
      When this happens the cpu just spins flushing billions of pages and
      this triggers the NMI watchdog and other problems.
      
      We took care of this on the TSB side by doing a linear scan of the
      table once we pass a certain threshold.
      
      Do something similar for the TLB flush, however we are limited by
      the TLB flush facilities provided by the different chip variants.
      
      First of all we use a (mostly arbitrary) cut-off of 256K, which is
      about 32 pages.  This can be tuned in the future.
      
      The huge range code path for each chip works as follows:
      
      1) On spitfire we flush all non-locked TLB entries using diagnostic
         accesses.
      
      2) On cheetah we use the "flush all" TLB flush.
      
      3) On sun4v/hypervisor we do a TLB context flush on context 0, which
         unlike previous chips does not remove "permanent" or locked
         entries.
      
      We could probably do something better on spitfire, such as limiting
      the flush to kernel TLB entries or even doing range comparisons.
      However that probably isn't worth it since those chips are old and
      the TLB only had 64 entries.
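
      The cut-off logic, sketched at the C level (the constant and function
      names here are assumptions; the real per-chip flush paths live in sparc64
      assembly):

          #define HUGE_FLUSH_CUTOFF       (256 * 1024)    /* ~32 8K pages */

          static void flush_kernel_range(unsigned long start, unsigned long end)
          {
                  if (end - start > HUGE_FLUSH_CUTOFF)
                          flush_whole_kernel_tlb();       /* chip-specific "flush all" */
                  else
                          flush_kernel_range_by_page(start, end);
          }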
      Reported-by: James Clarke <jrtc27@jrtc27.com>
      Tested-by: James Clarke <jrtc27@jrtc27.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a74ad5e6