1. 23 Jul 2013 (1 commit)
    • kprobes/x86: Call out into INT3 handler directly instead of using notifier · 17f41571
      Committed by Jiri Kosina
      In fd4363ff ("x86: Introduce int3 (breakpoint)-based
      instruction patching"), the mechanism introduced for notifying the
      alternatives code from the int3 exception handler that an
      exception occurred was a die notifier.
      
      This is, however, problematic, as early code might be using jump
      labels even before the notifier registration has been performed,
      which would then lead to an oops due to an unhandled exception.
      One such occurrence was encountered by Fengguang:
      
       int3: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       Modules linked in:
       CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.11.0-rc1-01429-g04bf576 #8
       task: ffff88000da1b040 ti: ffff88000da1c000 task.ti: ffff88000da1c000
       RIP: 0010:[<ffffffff811098cc>]  [<ffffffff811098cc>] ttwu_do_wakeup+0x28/0x225
       RSP: 0000:ffff88000dd03f10  EFLAGS: 00000006
       RAX: 0000000000000000 RBX: ffff88000dd12940 RCX: ffffffff81769c40
       RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000001
       RBP: ffff88000dd03f28 R08: ffffffff8176a8c0 R09: 0000000000000002
       R10: ffffffff810ff484 R11: ffff88000dd129e8 R12: ffff88000dbc90c0
       R13: ffff88000dbc90c0 R14: ffff88000da1dfd8 R15: ffff88000da1dfd8
       FS:  0000000000000000(0000) GS:ffff88000dd00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: 00000000ffffffff CR3: 0000000001c88000 CR4: 00000000000006e0
       Stack:
        ffff88000dd12940 ffff88000dbc90c0 ffff88000da1dfd8 ffff88000dd03f48
        ffffffff81109e2b ffff88000dd12940 0000000000000000 ffff88000dd03f68
        ffffffff81109e9e 0000000000000000 0000000000012940 ffff88000dd03f98
       Call Trace:
        <IRQ>
        [<ffffffff81109e2b>] ttwu_do_activate.constprop.56+0x6d/0x79
        [<ffffffff81109e9e>] sched_ttwu_pending+0x67/0x84
        [<ffffffff8110c845>] scheduler_ipi+0x15a/0x2b0
        [<ffffffff8104dfb4>] smp_reschedule_interrupt+0x38/0x41
        [<ffffffff8173bf5d>] reschedule_interrupt+0x6d/0x80
        <EOI>
        [<ffffffff810ff484>] ? __atomic_notifier_call_chain+0x5/0xc1
        [<ffffffff8105cc30>] ? native_safe_halt+0xd/0x16
        [<ffffffff81015f10>] default_idle+0x147/0x282
        [<ffffffff81017026>] arch_cpu_idle+0x3d/0x5d
        [<ffffffff81127d6a>] cpu_idle_loop+0x46d/0x5db
        [<ffffffff81127f5c>] cpu_startup_entry+0x84/0x84
        [<ffffffff8104f4f8>] start_secondary+0x3c8/0x3d5
        [...]
      
      Fix this by directly calling poke_int3_handler() from the int3
      exception handler (analogously to what ftrace has been doing
      already), instead of relying on a notifier whose registration
      might not yet have been finalized by the time of the first trap.
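      The shape of the fix, as a hedged sketch (the surrounding kprobes
      and notify_die handling in do_int3() is elided, and the
      annotations are typical of traps.c rather than quoted from this
      commit):

      dotraplinkage void __kprobes do_int3(struct pt_regs *regs, long error_code)
      {
      	/* Service text_poke_bp() breakpoints before any
      	 * notifier-based code runs, so that an early int3 (e.g.
      	 * from a jump label patched before the die notifier is
      	 * registered) is always handled. */
      	if (poke_int3_handler(regs))
      		return;

      	/* ... existing kprobes/notify_die handling continues ... */
      }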
      Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1307231007490.14024@pobox.suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 19 Jul 2013 (1 commit)
  3. 17 Jul 2013 (1 commit)
  4. 15 Jul 2013 (1 commit)
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in commit
      5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good
      example of the nasty type of bugs that can be created with
      improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly without any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  5. 10 Jul 2013 (1 commit)
  6. 04 Jul 2013 (2 commits)
    • x86: kill TIF_DEBUG · 37f07655
      Committed by Oleg Nesterov
      Because it is not used.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kratochvil <jan.kratochvil@redhat.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: soft-dirty bits for user memory changes tracking · 0f8975ec
      Committed by Pavel Emelyanov
      The soft-dirty bit is a bit on a PTE which helps to track which
      pages a task writes to.  In order to do this tracking, one should
      
        1. Clear the soft-dirty bits from the PTEs ("echo 4 > /proc/PID/clear_refs")
        2. Wait some time.
        3. Read the soft-dirty bits (bit 55 in /proc/PID/pagemap2 entries)
      
      To do this tracking, the writable bit is cleared from PTEs when the
      soft-dirty bit is.  Thus, after this, when the task tries to modify a
      page at some virtual address, a #PF occurs and the kernel sets the
      soft-dirty bit on the respective PTE.
      
      Note that although the task's whole address space is marked as r/o
      after the soft-dirty bits are cleared, the #PFs that occur after
      that are processed quickly.  This is because the pages are still
      mapped to physical memory, so all the kernel does is detect this
      and put the writable, dirty and soft-dirty bits back on the PTE.
      
      Another thing to note is that when mremap moves PTEs, they are
      marked with soft-dirty as well, since from the user's perspective
      mremap modifies the virtual memory at mremap's new address.
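      A self-contained userspace sketch of the three tracking steps
      above (written against the pagemap2 file this series introduces;
      later kernels expose the same bit 55 via /proc/PID/pagemap):

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
      	static char page[4096];
      	long pagesize = sysconf(_SC_PAGESIZE);
      	uint64_t entry;
      	int fd;

      	/* 1. Clear soft-dirty bits (also write-protects the PTEs). */
      	fd = open("/proc/self/clear_refs", O_WRONLY);
      	write(fd, "4", 1);
      	close(fd);

      	/* 2. Write to a page; the resulting #PF re-sets soft-dirty. */
      	page[0] = 1;

      	/* 3. Read bit 55 of the page's pagemap2 entry. */
      	fd = open("/proc/self/pagemap2", O_RDONLY);
      	pread(fd, &entry, sizeof(entry),
      	      ((uintptr_t)page / pagesize) * sizeof(entry));
      	close(fd);
      	printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
      	return 0;
      }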
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 29 Jun 2013 (1 commit)
  8. 27 Jun 2013 (5 commits)
  9. 26 Jun 2013 (5 commits)
  10. 23 Jun 2013 (1 commit)
    • trace,x86: Do not call local_irq_save() in load_current_idt() · 2b4bc789
      Committed by Steven Rostedt (Red Hat)
      As load_current_idt() is now what is used to update the IDT for the
      switches needed for NMI, lockdep debugging, and tracing, it must not
      call local_irq_save(). This is because one of its users is lockdep,
      which traces local_irq_save(), and when the debug trap is hit, we
      need to update the IDT before tracing that interrupts have been
      disabled. As load_current_idt() is used to do this, calling
      local_irq_save(), which lockdep traces, defeats the point of calling
      load_current_idt().
      
      As interrupts are already disabled when it is used by lockdep and
      NMI, the only other user is tracing, which can disable interrupts
      itself. Simply have the tracing update disable interrupts before
      calling load_current_idt(), instead of breaking the other users
      (sketched below).
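      A minimal sketch of the resulting tracing-side pattern, assuming a
      small helper (the name switch_idt() is illustrative, not quoted
      from the commit):

      static void switch_idt(void *arg)
      {
      	unsigned long flags;

      	/* Disable interrupts here, in the tracing path only, so that
      	 * load_current_idt() itself stays free of local_irq_save()
      	 * and remains usable from the lockdep and NMI paths, which
      	 * already run with interrupts disabled. */
      	local_irq_save(flags);
      	load_current_idt();
      	local_irq_restore(flags);
      }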
      
      Here's the dump that happened:
      
      ------------[ cut here ]------------
      WARNING: at /work/autotest/nobackup/linux-test.git/kernel/fork.c:1196 copy_process+0x2c3/0x1398()
      DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled)
      Modules linked in:
      CPU: 1 PID: 4570 Comm: gdm-simple-gree Not tainted 3.10.0-rc3-test+ #5
      Hardware name:                  /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
       ffffffff81d2a7a5 ffff88006ed13d50 ffffffff8192822b ffff88006ed13d90
       ffffffff81035f25 ffff8800721c6000 ffff88006ed13da0 0000000001200011
       0000000000000000 ffff88006ed5e000 ffff8800721c6000 ffff88006ed13df0
      Call Trace:
       [<ffffffff8192822b>] dump_stack+0x19/0x1b
       [<ffffffff81035f25>] warn_slowpath_common+0x67/0x80
       [<ffffffff81035fe1>] warn_slowpath_fmt+0x46/0x48
       [<ffffffff812bfc5d>] ? __raw_spin_lock_init+0x31/0x52
       [<ffffffff810341f7>] copy_process+0x2c3/0x1398
       [<ffffffff8103539d>] do_fork+0xa8/0x260
       [<ffffffff810ca7b1>] ? trace_preempt_on+0x2a/0x2f
       [<ffffffff812afb3e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
       [<ffffffff81937fe7>] ? sysret_check+0x1b/0x56
       [<ffffffff81937fe7>] ? sysret_check+0x1b/0x56
       [<ffffffff810355cf>] SyS_clone+0x16/0x18
       [<ffffffff81938369>] stub_clone+0x69/0x90
       [<ffffffff81937fc2>] ? system_call_fastpath+0x16/0x1b
      ---[ end trace 8b157a9d20ca1aa2 ]---
      
      in fork.c:
      
       #ifdef CONFIG_PROVE_LOCKING
      	DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled); <-- bug here
      	DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
       #endif
      
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 21 Jun 2013 (9 commits)
    • Revert "crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher" · 99f42f93
      Committed by Jussi Kivilinna
      This reverts commit cf1521a1.
      
      The instruction (vpgatherdd) that this implementation relied on
      turned out to be a slow performer on real hardware (i5-4570). The
      previous 8-way twofish/AVX implementation is therefore faster, and
      this implementation should be removed.
      
      Converting this implementation to use the same method as twofish/AVX
      for table look-ups would give an additional ~3% speed-up over
      twofish/AVX, but would hardly be worth the added code and binary size.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • Revert "crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher" · 3d387ef0
      Committed by Jussi Kivilinna
      This reverts commit 60488010.
      
      The instruction (vpgatherdd) that this implementation relied on
      turned out to be a slow performer on real hardware (i5-4570). The
      previous 4-way blowfish implementation is therefore faster, and
      this implementation should be removed.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • x86, trace: Add irq vector tracepoints · cf910e83
      Committed by Seiji Aguchi
      [Purpose of this patch]
      
      As Vaibhav explained in the thread below, tracepoints for irq vectors
      are useful.
      
      http://www.spinics.net/lists/mm-commits/msg85707.html
      
      <snip>
      The current interrupt traces from irq_handler_entry and irq_handler_exit
      provide when an interrupt is handled.  They provide good data about when
      the system has switched to kernel space and how it affects the currently
      running processes.
      
      There are some IRQ vectors which trigger the system into kernel space,
      which are not handled in generic IRQ handlers.  Tracing such events gives
      us the information about IRQ interaction with other system events.
      
      The trace also tells where the system is spending its time.  We want to
      know which cores are handling interrupts and how they are affecting other
      processes in the system.  Also, the trace provides information about when
      the cores are idle and which interrupts are changing that state.
      <snip>
      
      On the other hand, my use case is tracing just the local timer event
      and getting the value of the instruction pointer.
      
      I previously suggested adding an argument to the local timer event to
      get the instruction pointer, but it can also be obtained with an
      external module such as systemtap, so I don't need to add any
      argument to the irq vector tracepoints now.
      
      [Patch Description]
      
      Vaibhav's patch shared a single tracepoint pair,
      irq_vector_entry/irq_vector_exit, across all events. But the use
      case above calls for tracing a specific irq vector rather than all
      events, and in that case we are concerned about the overhead of
      unwanted events.
      
      So, add the following tracepoints instead of introducing
      irq_vector_entry/exit, so that we can enable them independently:
         - local_timer_vector
         - reschedule_vector
         - call_function_vector
         - call_function_single_vector
         - irq_work_entry_vector
         - error_apic_vector
         - thermal_apic_vector
         - threshold_apic_vector
         - spurious_apic_vector
         - x86_platform_ipi_vector
      
      Also, introduce logic to switch the IDT at enabling/disabling time,
      so that the time penalty is zero when the tracepoints are disabled.
      Detailed explanations are as follows.
       - Create trace irq handlers with entering_irq()/exiting_irq().
       - Create a new IDT, trace_idt_table, at boot time by adding logic to
         _set_gate(). It is just a copy of the original IDT.
       - Register the new handlers for the tracepoints in the new IDT by
         introducing macros around alloc_intr_gate(), called when the
         irq_vector handlers are registered.
       - Add a check of whether irq vector tracing is on/off to
         load_current_idt(). This has to be done below the debug check,
         for these reasons:
         - Switching to the debug IDT may be kicked while tracing is enabled.
         - On the other hand, switching to the trace IDT is kicked only
           when debugging is disabled.
      
      In addition, the new IDT is created only when CONFIG_TRACING is
      enabled, to avoid it being used for other purposes. A userspace
      sketch of enabling these tracepoints follows.
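      A userspace sketch of enabling only the local timer tracepoints
      via tracefs (the event names and mount point are assumptions based
      on the standard irq_vectors event layout, not quoted from this
      patch):

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      static void enable_event(const char *ev)
      {
      	char path[256];

      	/* Assumed tracefs layout: one enable file per event. */
      	snprintf(path, sizeof(path),
      	         "/sys/kernel/debug/tracing/events/irq_vectors/%s/enable", ev);
      	int fd = open(path, O_WRONLY);
      	if (fd >= 0) {
      		write(fd, "1", 1);
      		close(fd);
      	}
      }

      int main(void)
      {
      	enable_event("local_timer_entry");
      	enable_event("local_timer_exit");
      	/* Read the result with: cat /sys/kernel/debug/tracing/trace */
      	return 0;
      }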
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • x86: Rename variables for debugging · 629f4f9d
      Committed by Seiji Aguchi
      Rename the debugging-related variables to describe their meaning
      precisely.
      
      Also, introduce a generic way to switch the IDT by checking the
      current state, debug on/off.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323A8.7050905@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Committed by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, if the
      tracepoints are simply added to the performance-sensitive path of
      the interrupt handlers, they may cause a performance problem due to
      the time penalty.
      
      To solve this, the idea is to prepare non-trace and trace irq
      handlers and switch their IDTs at enabling/disabling time.
      
      So, let's introduce entering_irq()/exiting_irq() for the pre/post-
      processing of each irq handler.
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      Trace irq_handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      If the tracepoints could be placed outside entering_irq()/exiting_irq()
      as follows, it would look cleaner.
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But it doesn't work.
      The problem is with irq_enter/exit() being called: they must be
      called before trace_irq_entry/exit(), because rcu_irq_enter() must
      be called before any tracepoints are used, as tracepoints use RCU
      to synchronize.
      
      As a possible alternative, we might be able to call irq_enter()
      first, as follows, if irq_enter() could nest.
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But it doesn't work, either.
      If irq_enter() nested, it would incur a time penalty, because it
      would have to check whether it had already been called. That time
      penalty is not desirable in performance-sensitive paths, even if it
      is tiny.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • x86, fpu: Use static_cpu_has_safe before alternatives · 5f8c4218
      Committed by Borislav Petkov
      The call stack below shows how this happens: basically,
      eager_fpu_init() calls __thread_fpu_begin(current), which then does
      if (!use_eager_fpu()), which, in turn, uses static_cpu_has.
      
      And we're executing before alternatives have run, so static_cpu_has
      doesn't work there yet.
      
      Use the safe variant in this path, which becomes optimal after
      alternatives have run.
      
      WARNING: at arch/x86/kernel/cpu/common.c:1368 warn_pre_alternatives+0x1e/0x20()
      You're using static_cpu_has before alternatives have run!
      Modules linked in:
      Pid: 0, comm: swapper Not tainted 3.9.0-rc8+ #1
      Call Trace:
       warn_slowpath_common
       warn_slowpath_fmt
       ? fpu_finit
       warn_pre_alternatives
       eager_fpu_init
       fpu_init
       cpu_init
       trap_init
       start_kernel
       ? repair_env_string
       x86_64_start_reservations
       x86_64_start_kernel
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-6-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86: Add a static_cpu_has_safe variant · 4a90a99c
      Committed by Borislav Petkov
      We want to use this in early code, where alternatives might not
      have run yet; for that case we fall back to the dynamic
      boot_cpu_has.
      
      For that, force a 5-byte jump, since the compiler could be
      generating differently sized jumps for each label.
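      A hedged usage sketch (the feature flag and the callee are
      illustrative, not quoted from this series):

      	/* Safe in early boot: until alternatives have run, this
      	 * falls back to the dynamic boot_cpu_has() check instead of
      	 * silently returning false. */
      	if (static_cpu_has_safe(X86_FEATURE_EAGER_FPU))
      		setup_eager_fpu();	/* hypothetical early-FPU setup */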
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-5-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86: Sanity-check static_cpu_has usage · 5700f743
      Committed by Borislav Petkov
      static_cpu_has may be used only after alternatives have run. Before
      that it always returns false if constant folding with
      __builtin_constant_p() doesn't happen, and you don't want that.
      
      This patch is the result of me debugging an issue where I
      overzealously put static_cpu_has in code which executed before
      alternatives had run, and had to spend some time scratching my head
      and cursing at the monitor.
      
      So add a jump to a warning which screams loudly when we use this
      function too early. The alternatives mechanism patches that check
      away when it patches the rest of the kernel image.
      
      [ hpa: factored this into its own configuration option.  If we want to
        have an overarching option, it should be an option which selects
        other options, not as a group option in the source code. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-4-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, cpu: Add a synthetic, always true, cpu feature · c3b83598
      Committed by Borislav Petkov
      This will be used in alternatives later as an always-replace flag.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/1370772454-6106-2-git-send-email-bp@alien8.de
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  12. 20 Jun 2013 (3 commits)
    • x86: Fix trigger_all_cpu_backtrace() implementation · b52e0a7c
      Committed by Michel Lespinasse
      The following change fixes the x86 implementation of
      trigger_all_cpu_backtrace(), which was previously (accidentally,
      as far as I can tell) disabled to always return false, as on
      architectures that do not implement this function.
      
      trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h,
      should call arch_trigger_all_cpu_backtrace() if available, or
      return false if the underlying arch doesn't implement this
      function.
      
      x86 did provide a suitable arch_trigger_all_cpu_backtrace()
      implementation, but it wasn't actually being used because it was
      declared in asm/nmi.h, which linux/nmi.h doesn't include. Also,
      linux/nmi.h couldn't easily be fixed by including asm/nmi.h,
      because that file is not available on all architectures.
      
      I am proposing to fix this by moving the x86 definition of
      arch_trigger_all_cpu_backtrace() to asm/irq.h.
      
      Tested via: echo l > /proc/sysrq-trigger
      
      Before the change, this used a fallback implementation which shows
      backtraces on active CPUs (using smp_call_function_interrupt()).
      
      After the change, this shows NMI backtraces on all CPUs.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: Fix section mismatch on load_ucode_ap · 94978599
      Committed by Paul Gortmaker
      We are in the process of removing all the __cpuinit annotations.
      While working on making that change, an existing problem was
      made evident:
      
        WARNING: arch/x86/kernel/built-in.o(.text+0x198f2): Section mismatch
        in reference from the function cpu_init() to the function
        .init.text:load_ucode_ap()   The function cpu_init() references
        the function __init load_ucode_ap().  This is often because cpu_init
        lacks a __init annotation or the annotation of load_ucode_ap is wrong.
      
      This now appears because in my working tree, cpu_init() is no longer
      tagged as __cpuinit, and so the audit picks up the mismatch.  The 2nd
      hypothesis from the audit is the correct one, as there was an incorrect
      __init tag on the prototype in the header (but __cpuinit was used on
      the function itself).
      
      The audit is telling us that the prototype's __init annotation took
      effect and the function did land in the .init.text section.  Checking
      with objdump on a mainline tree that still has __cpuinit shows that
      the __cpuinit on the function takes precedence over the __init on the
      prototype, but that won't be true once we make __cpuinit a no-op.
      
      Even though we are removing __cpuinit, we temporarily align both
      the function and the prototype on __cpuinit, so that the changeset
      can be applied to stable trees if desired (illustrated below).
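      A hedged illustration of that mismatch (simplified; the real
      declarations live in the microcode headers):

      /* The prototype in the header carried the wrong tag: */
      extern void __init load_ucode_ap(void);

      /* ... while the definition used __cpuinit: */
      void __cpuinit load_ucode_ap(void)
      {
      	/* load early microcode on a secondary CPU ... */
      }

      /* Once __cpuinit becomes a no-op, the prototype's __init takes
       * effect, and the non-init cpu_init() ends up referencing
       * discarded .init.text. */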
      
      [ hpa: build fix only, no object code change ]
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: stable <stable@vger.kernel.org> # 3.9+
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Link: http://lkml.kernel.org/r/1371654926-11729-1-git-send-email-paul.gortmaker@windriver.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86 / ACPI / sleep: Provide registration for acpi_suspend_lowlevel. · d6a77ead
      Committed by Konrad Rzeszutek Wilk
      By default this will be x86_acpi_suspend_lowlevel.  This
      registration allows another callback to be installed if there is a
      need to use a different platform-specific callback; a minimal
      sketch follows.
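      A minimal sketch of the registration (the default assignment
      follows the commit text; the overriding callback is hypothetical):

      /* acpi_suspend_lowlevel as an overridable function pointer: */
      int (*acpi_suspend_lowlevel)(void);

      /* Default registration on x86: */
      acpi_suspend_lowlevel = x86_acpi_suspend_lowlevel;

      /* A platform (e.g. Xen) can install its own implementation: */
      acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel;	/* hypothetical */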
      Signed-off-by: Liang Tang <liang.tang@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Tested-by: Ben Guthro <benjamin.guthro@citrix.com>
      Acked-by: "H. Peter Anvin" <hpa@linux.intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  13. 19 Jun 2013 (1 commit)
  14. 13 Jun 2013 (1 commit)
  15. 11 Jun 2013 (2 commits)
    • efi: Convert runtime services function ptrs · 43ab0476
      Committed by Borislav Petkov
      ... to void * like the boot services and lose all the void * casts. No
      functionality change.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    • Modify UEFI anti-bricking code · f8b84043
      Committed by Matthew Garrett
      This patch reworks the UEFI anti-bricking code, including an effective
      reversion of cc5a080c and 31ff2f20. It turns out that calling
      QueryVariableInfo() from boot services results in some firmware
      implementations jumping to physical addresses even after entering virtual
      mode, so until we have 1:1 mappings for UEFI runtime space this isn't
      going to work so well.
      
      Reverting these gets us back to the situation where we'd refuse to create
      variables on some systems because they classify deleted variables as "used"
      until the firmware triggers a garbage collection run, which they won't do
      until they reach a lower threshold. This results in it being impossible to
      install a bootloader, which is unhelpful.
      
      Feedback from Samsung indicates that the firmware doesn't need more than
      5KB of storage space for its own purposes, so that seems like a reasonable
      threshold. However, there's still no guarantee that a platform will attempt
      garbage collection merely because it drops below this threshold. It seems
      that this is often only triggered if an attempt to write generates a
      genuine EFI_OUT_OF_RESOURCES error. We can force that by attempting to
      create a variable larger than the remaining space. This should fail, but if
      it somehow succeeds we can then immediately delete it.
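      A hedged sketch of that write-and-delete trick (variable and helper
      names are illustrative, not the exact ones in the patch):

      	/* If free variable-store space drops below the ~5KB
      	 * threshold, try to create a dummy variable larger than the
      	 * remaining space.  A conformant firmware fails this with
      	 * EFI_OUT_OF_RESOURCES, and a genuine out-of-resources error
      	 * is what triggers garbage collection of deleted variables. */
      	status = efi.set_variable(dummy_name, &dummy_guid, attributes,
      				  remaining_size + 1024, dummy_data);

      	/* If it somehow succeeded, delete it again immediately (a
      	 * zero-sized write deletes an EFI variable). */
      	if (status == EFI_SUCCESS)
      		efi.set_variable(dummy_name, &dummy_guid, attributes,
      				 0, NULL);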
      
      I've tested this on the UEFI machines I have available, but I don't have
      a Samsung and so can't verify that it avoids the bricking problem.
      Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
      Signed-off-by: Lee, Chun-Yi <jlee@suse.com> [ dummy variable cleanup ]
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  16. 07 Jun 2013 (1 commit)
  17. 05 Jun 2013 (4 commits)