1. 30 Jan 2016 (1 commit)
  2. 19 May 2015 (1 commit)
    • x86/alternatives, x86/fpu: Add 'alternatives_patched' debug flag and use it in xsave_state() · 5e907bb0
      Committed by Ingo Molnar
      We'd like to use xsave_state() earlier, but its SYSTEM_BOOTING check
      is too imprecise.
      
      The real condition that xsave_state() would like to check is whether
      alternative XSAVE instructions were patched into the kernel image
      already.
      
      Add such a (read-mostly) debug flag and use it in xsave_state().
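
      A minimal sketch of the shape of the change (plain userspace C;
      the names and the assert are illustrative stand-ins -- in the
      kernel the flag is marked __read_mostly and the check is a debug
      warning, not an assert):

        #include <assert.h>
        #include <stdbool.h>

        static bool alternatives_patched;      /* __read_mostly in-kernel */

        static void alternative_instructions(void)
        {
                /* ... patch every alternatives site in the image ... */
                alternatives_patched = true;   /* written once, read often */
        }

        static void xsave_state(void)
        {
                /* The precise condition: were the XSAVE alternatives
                 * actually patched in already? */
                assert(alternatives_patched);
                /* ... run the (now patched) XSAVE sequence ... */
        }

        int main(void)
        {
                alternative_instructions();
                xsave_state();
                return 0;
        }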
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5e907bb0
  3. 04 Apr 2015 (1 commit)
    • x86/alternatives: Fix ALTERNATIVE_2 padding generation properly · dbe4058a
      Committed by Borislav Petkov
      Quentin caught a corner case with the generation of instruction
      padding in the ALTERNATIVE_2 macro: if len(orig_insn) <
      len(alt1) < len(alt2), then not enough padding gets added and
      that is not good(tm) as we could overwrite the beginning of the
      next instruction.
      
      Luckily, at the time of this writing, we don't have
      ALTERNATIVE_2() invocations which have that problem and even if
      we did, a simple fix would be to prepend the instructions with
      enough prefixes so that that corner case doesn't happen.
      
      However, it would be best to fix it properly. See below for a
      simple, abstracted example of what we're doing.
      
      So what we ended up doing is, we compute the
      
      	max(len(alt1), len(alt2)) - len(orig_insn)
      
      and feed that value to the .skip gas directive. The max() cannot
      have conditionals due to gas limitations, thus the fancy integer
      math.
      
      With this patch, all ALTERNATIVE_2 sites get padded correctly;
      generated, deliberately obscure test cases pass too:
      
        #define alt_max_short(a, b)    ((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
      
        #define gen_skip(orig, alt1, alt2, marker)	\
        	.skip -((alt_max_short(alt1, alt2) - (orig)) > 0) * \
        		(alt_max_short(alt1, alt2) - (orig)),marker
      
        	.pushsection .text, "ax"
        .globl main
        main:
        	gen_skip(1, 2, 4, 0x09)
        	gen_skip(4, 1, 2, 0x10)
        	...
        	.popsection
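
      For reference, a runnable C analogue of the branchless max (C's
      comparison operators yield 0/1 while gas's yield 0/-1, which is
      why the assembler macro above needs the extra inner negation;
      this snippet is illustrative, not kernel code):

        #include <assert.h>

        /* C version of alt_max_short(): -(a < b) is 0 or all-ones. */
        #define alt_max(a, b)  ((a) ^ (((a) ^ (b)) & -((a) < (b))))

        /* Bytes the .skip directive must emit past the original insn. */
        #define pad_len(orig, a1, a2) \
                ((alt_max(a1, a2) > (orig)) ? (alt_max(a1, a2) - (orig)) : 0)

        int main(void)
        {
                assert(alt_max(2, 4) == 4 && alt_max(4, 1) == 4);
                assert(pad_len(1, 2, 4) == 3);  /* the caught corner case */
                assert(pad_len(4, 1, 2) == 0);  /* orig already longest */
                return 0;
        }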
      
      Thanks to Quentin for catching it and double-checking the fix!
      Reported-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150404133443.GE21152@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dbe4058a
  4. 23 Feb 2015 (1 commit)
    • x86/alternatives: Add instruction padding · 4332195c
      Committed by Borislav Petkov
      Up until now we have always had to take care that the length of
      the new instruction replacing the old one is less than or equal
      to the length of the old instruction. If the new instruction is
      longer, at the time it replaces the old instruction it will
      overwrite the beginning of the next instruction in the kernel
      image and cause your pants to catch fire.
      
      So instead of having to pay attention, teach the alternatives framework
      to pad shorter old instructions with NOPs at build time - but only in the
      case when
      
        len(old instruction(s)) < len(new instruction(s))
      
      and add nothing in the >= case. (In that case we do add_nops() when
      patching).
      
      This way, alternatives users shouldn't have to care about
      instruction sizes and can simply use the macros.
      
      Add asm ALTERNATIVE* flavor macros too, while at it.
      
      Also, we need to save the pad length in a separate struct alt_instr
      member for NOP optimization and the way to do that reliably is to carry
      the pad length instead of trying to detect whether we're looking at
      single-byte NOPs or at pathological instruction offsets like e9 90 90 90
      90, for example, which is a valid instruction.
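
      A small userspace sketch of that rule (hypothetical helper and
      struct, for illustration only; not struct alt_instr):

        #include <stdio.h>
        #include <string.h>

        #define NOP 0x90                /* single-byte x86 NOP */

        struct alt_site {
                unsigned char bytes[16];
                unsigned char padlen;   /* carried explicitly: the
                                         * "e9 90 90 90 90" case above
                                         * shows why the pad can't be
                                         * re-detected from the bytes */
        };

        static void pad_site(struct alt_site *s, unsigned orig_len,
                             unsigned repl_len)
        {
                /* Pad only when the replacement is longer. */
                s->padlen = repl_len > orig_len ? repl_len - orig_len : 0;
                memset(s->bytes + orig_len, NOP, s->padlen);
        }

        int main(void)
        {
                struct alt_site s = { .bytes = { 0x89, 0xd8 } };
                pad_site(&s, 2, 5);     /* 5-byte replacement: 3 NOPs */
                printf("padlen=%u\n", s.padlen);
                return 0;
        }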
      
      Thanks to Michael Matz for the great help with toolchain questions.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      4332195c
  5. 30 May 2014 (1 commit)
  6. 23 Jul 2013 (1 commit)
    • kprobes/x86: Call out into INT3 handler directly instead of using notifier · 17f41571
      Committed by Jiri Kosina
      In fd4363ff ("x86: Introduce int3 (breakpoint)-based
      instruction patching"), the mechanism introduced for notifying
      the alternatives code from the int3 exception handler that an
      exception occurred was die_notifier.
      
      This is however problematic, as early code might be using jump
      labels even before the notifier registration has been performed,
      which will then lead to an oops due to an unhandled exception.
      One such occurrence was encountered by Fengguang:
      
       int3: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       Modules linked in:
       CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.11.0-rc1-01429-g04bf576 #8
       task: ffff88000da1b040 ti: ffff88000da1c000 task.ti: ffff88000da1c000
       RIP: 0010:[<ffffffff811098cc>]  [<ffffffff811098cc>] ttwu_do_wakeup+0x28/0x225
       RSP: 0000:ffff88000dd03f10  EFLAGS: 00000006
       RAX: 0000000000000000 RBX: ffff88000dd12940 RCX: ffffffff81769c40
       RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000001
       RBP: ffff88000dd03f28 R08: ffffffff8176a8c0 R09: 0000000000000002
       R10: ffffffff810ff484 R11: ffff88000dd129e8 R12: ffff88000dbc90c0
       R13: ffff88000dbc90c0 R14: ffff88000da1dfd8 R15: ffff88000da1dfd8
       FS:  0000000000000000(0000) GS:ffff88000dd00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: 00000000ffffffff CR3: 0000000001c88000 CR4: 00000000000006e0
       Stack:
        ffff88000dd12940 ffff88000dbc90c0 ffff88000da1dfd8 ffff88000dd03f48
        ffffffff81109e2b ffff88000dd12940 0000000000000000 ffff88000dd03f68
        ffffffff81109e9e 0000000000000000 0000000000012940 ffff88000dd03f98
       Call Trace:
        <IRQ>
        [<ffffffff81109e2b>] ttwu_do_activate.constprop.56+0x6d/0x79
        [<ffffffff81109e9e>] sched_ttwu_pending+0x67/0x84
        [<ffffffff8110c845>] scheduler_ipi+0x15a/0x2b0
        [<ffffffff8104dfb4>] smp_reschedule_interrupt+0x38/0x41
        [<ffffffff8173bf5d>] reschedule_interrupt+0x6d/0x80
        <EOI>
        [<ffffffff810ff484>] ? __atomic_notifier_call_chain+0x5/0xc1
        [<ffffffff8105cc30>] ? native_safe_halt+0xd/0x16
        [<ffffffff81015f10>] default_idle+0x147/0x282
        [<ffffffff81017026>] arch_cpu_idle+0x3d/0x5d
        [<ffffffff81127d6a>] cpu_idle_loop+0x46d/0x5db
        [<ffffffff81127f5c>] cpu_startup_entry+0x84/0x84
        [<ffffffff8104f4f8>] start_secondary+0x3c8/0x3d5
        [...]
      
      Fix this by directly calling poke_int3_handler() from the int3
      exception handler (analogously to what ftrace already does),
      instead of relying on a notifier whose registration might not
      yet have been finalized by the time of the first trap.
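
      The shape of the fix as a compilable sketch (stubbed types and
      bodies; only the call structure mirrors the commit):

        #include <stdbool.h>
        #include <stdio.h>

        struct pt_regs;                 /* opaque here; kernel-defined */

        /* Stub standing in for the text-patching check. */
        static bool poke_int3_handler(struct pt_regs *regs)
        {
                (void)regs;
                return false;           /* "not a patching trap" here */
        }

        static void do_int3(struct pt_regs *regs)
        {
                /* Direct call, first thing: works even before any
                 * notifier registration has happened. */
                if (poke_int3_handler(regs))
                        return;
                puts("fall through to the kprobes/debug notifier chain");
        }

        int main(void)
        {
                do_int3(NULL);
                return 0;
        }
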
      Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1307231007490.14024@pobox.suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17f41571
  7. 19 Jul 2013 (1 commit)
  8. 17 Jul 2013 (1 commit)
  9. 22 Sep 2012 (1 commit)
  10. 23 Aug 2012 (1 commit)
    • x86/smp: Don't ever patch back to UP if we unplug cpus · 816afe4f
      Committed by Rusty Russell
      We still patch SMP instructions to UP variants if we boot with a
      single CPU, but not at any other time.  In particular, not if we
      unplug CPUs to return to a single CPU.
      
      Paul McKenney points out:
      
       mean offline overhead is 6251/48=130.2 milliseconds.
      
       If I remove the alternatives_smp_switch() from the offline
       path [...] the mean offline overhead is 550/42=13.1 milliseconds
      
      Basically, we're never going to get those 120ms back, and the
      code is pretty messy.
      
      We get rid of:
      
       1) The "smp-alt-once" boot option. (It's actually "smp-alt-boot";
          the documentation is wrong.) Its behavior is now the default.
      
       2) The skip_smp_alternatives flag used by suspend.
      
       3) arch_disable_nonboot_cpus_begin() and arch_disable_nonboot_cpus_end()
          which were only used to set this one flag.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Paul McKenney <paul.mckenney@us.ibm.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/87vcgwwive.fsf@rustcorp.com.au
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      816afe4f
  11. 30 Jun 2012 (1 commit)
  12. 21 Jan 2012 (1 commit)
    • x86: Adjust asm constraints in atomic64 wrappers · 819165fb
      Committed by Jan Beulich
      Eric pointed out overly restrictive constraints in atomic64_set(), but
      there are issues throughout the file. In the cited case, %ebx and %ecx
      are inputs only (don't get changed by either of the two low level
      implementations). This was also the case elsewhere.
      
      Further, in many cases early-clobber indicators were missing.
      
      Finally, the previous implementation rolled a custom alternative
      instruction macro from scratch, rather than using
      alternative_call() (which was introduced with the commit that the
      description of the change in question actually refers to).
      Converting to it has the benefit of not hiding the referenced
      symbols from the compiler, though it requires them to be declared
      beyond just the exporting source file (which, as a desirable side
      effect, in turn allows that exporting file to become a real
      5-line stub).
      
      This patch does not eliminate the overly restrictive memory clobbers,
      however: Doing so would occasionally make the compiler set up a second
      register for accessing the memory object (to satisfy the added "m"
      constraint), and it's not clear which of the two non-optimal
      alternatives is better.
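
      The constraint distinctions at issue, in a compilable x86 sketch
      (illustrative arithmetic, not the atomic64 code):

        #include <assert.h>

        /* "b" pins x to %ebx as an input only (the asm never changes
         * it, so no clobber is needed); "=&r" marks the output
         * early-clobber because it is written before the last input
         * is read. */
        static unsigned add_demo(unsigned x, unsigned y)
        {
                unsigned out;
                asm ("mov %1, %0\n\t"
                     "add %2, %0"
                     : "=&r" (out)
                     : "b" (x), "r" (y));
                return out;
        }

        int main(void)
        {
                assert(add_demo(2, 3) == 5);
                return 0;
        }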
      
      v2: Re-do the declaration and exporting of the internal symbols.
      Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Link: http://lkml.kernel.org/r/4F19A2A5020000780006E0D9@nat28.tlf.novell.com
      Cc: Luca Barbieri <luca@luca-barbieri.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      819165fb
  13. 16 Sep 2011 (1 commit)
    • asm alternatives: remove incorrect alignment notes · a7f934d4
      Committed by Linus Torvalds
      On x86-64, they were just wasteful: with the explicitly added (now
      unnecessary) padding, the size of the alternatives structure was 16
      bytes, and an alignment of 8 bytes didn't hurt much.
      
      However, it was still silly, since the natural size and alignment for
      the structure is actually just 12 bytes, 4-byte aligned since commit
      59e97e4d ("x86: Make alternative instruction pointers relative").
      So removing the padding, and removing the extra alignment is just a good
      idea.
      
      On x86-32, the alignment of 4 bytes was correct, but was incorrectly
      hardcoded as 8 bytes in <asm/alternative-asm.h>.  That header file
      used to be an x86-64 only header file, but various unification efforts
      have made it be used for x86-32 too (ie the unification of rwlock and
      rwsem).
      
      That in turn caused x86-32 boot failures, because the extra alignment
      would result in random zero-filled words in the altinstructions section,
      causing oopses early at boot when doing alternative instruction
      replacement.
      
      So just remove all the alignment noise entirely.  It's wrong, and it's
      unnecessary.  The section itself is already properly aligned by the
      linker scripts, and all additions to the section had better be of the
      proper 12-byte format, keeping it aligned.  So if the align directive
      were to ever make a difference, that would be an indication of a serious
      bug to begin with.
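
      The natural layout in question, as a compilable sketch (field
      names approximate that era's struct alt_instr):

        #include <assert.h>
        #include <stdint.h>

        struct alt_instr {
                int32_t  instr_offset;    /* original insn, relative */
                int32_t  repl_offset;     /* replacement insn, relative */
                uint16_t cpuid;           /* cpufeature bit */
                uint8_t  instrlen;        /* length of original */
                uint8_t  replacementlen;  /* length of replacement */
        };

        int main(void)
        {
                /* 12 bytes, 4-byte aligned on common ABIs -- with no
                 * padding and no extra .align games. */
                assert(sizeof(struct alt_instr) == 12);
                assert(_Alignof(struct alt_instr) == 4);
                return 0;
        }
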
      Reported-by: Werner Landgraf <w.landgraf@ru.r>
      Acked-by: Andrew Lutomirski <luto@mit.edu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a7f934d4
  14. 14 Jul 2011 (1 commit)
  15. 19 Apr 2011 (1 commit)
  16. 05 Apr 2011 (1 commit)
    • jump label: Introduce static_branch() interface · d430d3d7
      Committed by Jason Baron
      Introduce:
      
      static __always_inline bool static_branch(struct jump_label_key *key);
      
      instead of the old JUMP_LABEL(key, label) macro.
      
      In this way, jump labels become really easy to use:
      
      Define:
      
              struct jump_label_key jump_key;
      
      Can be used as:
      
              if (static_branch(&jump_key))
                      do unlikely code
      
      enable/disable via:
      
              jump_label_inc(&jump_key);
              jump_label_dec(&jump_key);
      
      that's it!
      
      For the jump labels disabled case, the static_branch() becomes an
      atomic_read(), and jump_label_inc()/dec() are simply atomic_inc(),
      atomic_dec() operations. We show testing results for this change below.
      
      Thanks to H. Peter Anvin for suggesting the 'static_branch()' construct.
      
      Since we now require a 'struct jump_label_key *key', we can store a pointer into
      the jump table addresses. In this way, we can enable/disable jump labels, in
      basically constant time. This change allows us to completely remove the previous
      hashtable scheme. Thanks to Peter Zijlstra for this re-write.
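
      A compilable sketch of the disabled (no jump label) fallback
      described above, with names mirroring the commit's interface:

        #include <stdatomic.h>
        #include <stdbool.h>

        struct jump_label_key { atomic_int enabled; };

        /* Fallback: a plain atomic read instead of a patched branch. */
        static inline bool static_branch(struct jump_label_key *key)
        {
                return atomic_load(&key->enabled) > 0;
        }

        static inline void jump_label_inc(struct jump_label_key *key)
        {
                atomic_fetch_add(&key->enabled, 1);
        }

        static inline void jump_label_dec(struct jump_label_key *key)
        {
                atomic_fetch_sub(&key->enabled, 1);
        }

        int main(void)
        {
                struct jump_label_key jump_key = { 0 };

                jump_label_inc(&jump_key);
                return static_branch(&jump_key) ? 0 : 1;
        }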
      
      Testing:
      
      I ran a series of 'tbench 20' runs 5 times (with reboots) for 3
      configurations, where tracepoints were disabled.
      
      jump label configured in
      avg: 815.6
      
      jump label *not* configured in (using atomic reads)
      avg: 800.1
      
      jump label *not* configured in (regular reads)
      avg: 803.4
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20110316212947.GA8792@redhat.com>
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Suggested-by: H. Peter Anvin <hpa@linux.intel.com>
      Tested-by: David Daney <ddaney@caviumnetworks.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d430d3d7
  17. 14 Dec 2010 (1 commit)
  18. 07 Dec 2010 (1 commit)
    • x86: Introduce text_poke_smp_batch() for batch-code modifying · 7deb18dc
      Committed by Masami Hiramatsu
      Introduce text_poke_smp_batch(). This function modifies several
      text areas with one stop_machine() on SMP. Because calling
      stop_machine() is a heavy operation, it is better to aggregate
      text_poke requests.
      
      ( Note: I've talked with Rusty about this interface, and
        he would not like to expand stop_machine() interface, since
        it is not for generic use. )
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095422.2961.51217.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7deb18dc
  19. 23 Sep 2010 (1 commit)
    • jump label: Base patch for jump label · bf5438fc
      Committed by Jason Baron
      Base patch to implement 'jump labeling'. Based on a new 'asm goto'
      inline assembly GCC mechanism, we can now branch to labels from an
      'asm goto' statement. This allows us to create a 'no-op' fastpath,
      which can subsequently be patched with a jump to the slowpath code.
      This is useful for code which might be rarely used, but which we'd
      like to be able to call, if needed. Tracepoints are the current
      use case these are being implemented for.
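
      A runnable illustration of that 'asm goto' building block
      (GCC/Clang on x86; in the kernel the asm body is a patchable
      no-op rather than this unconditional jmp):

        #include <stdio.h>

        static int demo(void)
        {
                /* The asm may branch to a C label -- the basis of
                 * jump labels. */
                asm goto ("jmp %l[slowpath]" : : : : slowpath);
                return 0;       /* fastpath: a no-op site in-kernel */
        slowpath:
                puts("slowpath taken");
                return 1;
        }

        int main(void)
        {
                return demo() == 1 ? 0 : 1;
        }
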
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <ee8b3595967989fdaf84e698dc7447d315ce972a.1284733808.git.jbaron@redhat.com>
      
      [ cleaned up some formatting ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bf5438fc
  20. 21 Sep 2010 (2 commits)
  21. 08 Jul 2010 (1 commit)
    • x86, alternatives: Use 16-bit numbers for cpufeature index · 83a7a2ad
      Committed by H. Peter Anvin
      We already have cpufeature indices above 255, so use a 16-bit number
      for the alternatives index.  This consumes a padding field and so
      doesn't add any size, but it means that abusing the padding field to
      create assembly errors on overflow no longer works.  We can retain the
      test simply by redirecting it to the .discard section, however.
      
      [ v3: updated to include open-coded locations ]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <tip-f88731e3068f9d1392ba71cc9f50f035d26a0d4f@git.kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      83a7a2ad
  22. 30 Apr 2010 (1 commit)
    • x86: Fix LOCK_PREFIX_HERE for uniprocessor build · b701a47b
      Committed by H. Peter Anvin
      Checkin b3ac891b:
      x86: Add support for lock prefix in alternatives
      
      ... did not define LOCK_PREFIX_HERE in the case of a uniprocessor
      build.  As a result, it would cause any of the usages of this macro to
      fail on a uniprocessor build.  Fix this by defining LOCK_PREFIX_HERE
      as a null string.
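
      The shape of the fix (a simplified sketch; the .smp_locks
      record-keeping is abridged and not the exact contemporary macro):

        #ifdef CONFIG_SMP
        #define LOCK_PREFIX_HERE                        \
                ".pushsection .smp_locks, \"a\"\n"      \
                ".balign 4\n"                           \
                ".long 671f\n"  /* record prefix location */ \
                ".popsection\n"                         \
                "671:"
        #else
        #define LOCK_PREFIX_HERE ""     /* the fix: null string on UP */
        #endif

        #define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
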
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Luca Barbieri <luca@luca-barbieri.com>
      LKML-Reference: <1267005265-27958-2-git-send-email-luca@luca-barbieri.com>
      b701a47b
  23. 29 Apr 2010 (1 commit)
  24. 07 Apr 2010 (1 commit)
    • x86: Add optimized popcnt variants · d61931d8
      Committed by Borislav Petkov
      Add support for the hardware version of the Hamming weight
      function, popcnt, present in CPUs which advertise it under CPUID,
      Function 0x0000_0001_ECX[23]. On CPUs which don't support it, we
      fall back to the default software versions in lib/hweight.c.
      
      A synthetic benchmark comparing popcnt with __sw_hweight64 showed almost
      a 3x speedup on an F10h machine.
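
      A runnable userspace analogue of the dispatch (the kernel patches
      the call site via alternatives instead of branching; the builtins
      are the compiler's, not kernel interfaces):

        #include <assert.h>
        #include <stdint.h>

        static unsigned __sw_hweight64(uint64_t w)
        {
                unsigned n = 0;

                for (; w; w &= w - 1)   /* clear lowest set bit */
                        n++;
                return n;
        }

        static unsigned hweight64(uint64_t w)
        {
                if (__builtin_cpu_supports("popcnt")) /* CPUID ECX[23] */
                        return (unsigned)__builtin_popcountll(w);
                return __sw_hweight64(w);
        }

        int main(void)
        {
                assert(hweight64(0xf0f0f0f0f0f0f0f0ull) == 32);
                return 0;
        }
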
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20100318112015.GC11152@aftab>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d61931d8
  25. 26 Feb 2010 (2 commits)
    • x86: Add support for lock prefix in alternatives · b3ac891b
      Committed by Luca Barbieri
      The current lock prefix UP/SMP alternative code doesn't allow
      LOCK_PREFIX to be used in alternatives code.
      
      This patch solves the problem by adding a new LOCK_PREFIX_ALTERNATIVE_PATCH
      macro that only records the lock prefix location but does not emit
      the prefix.
      
      The user of this macro can then start any alternative sequence with
      "lock" and have it UP/SMP patched.
      
      To make this work, the UP/SMP alternative code is changed to do the
      lock/DS prefix switching only if the byte actually contains a lock or
      DS prefix.
      
      Thus, if an alternative without the "lock" is selected, it will now do
      nothing instead of clobbering the code.
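
      A sketch of that "check before switching" rule (the prefix values
      are architectural: 0xf0 is lock, 0x3e is the DS override used as
      a benign filler; the helpers are illustrative, not the kernel's):

        #include <assert.h>

        static void smp_unlock_one(unsigned char *insn)
        {
                if (*insn == 0xf0)      /* a real lock prefix? */
                        *insn = 0x3e;   /* swap in harmless DS prefix */
                /* else: a non-lock alternative was patched in; leave it */
        }

        static void smp_lock_one(unsigned char *insn)
        {
                if (*insn == 0x3e)
                        *insn = 0xf0;
        }

        int main(void)
        {
                unsigned char b = 0xf0;

                smp_unlock_one(&b);
                assert(b == 0x3e);
                smp_lock_one(&b);
                assert(b == 0xf0);
                return 0;
        }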
      
      Changes in v2:
      - Naming change
      - Change label to not conflict with alternatives
      Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
      LKML-Reference: <1267005265-27958-2-git-send-email-luca@luca-barbieri.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      b3ac891b
    • x86: Add text_poke_smp for SMP cross modifying code · 3d55cc8a
      Committed by Masami Hiramatsu
      Add a generic text_poke_smp() for SMP which uses stop_machine()
      to synchronize code modification. This stop_machine() method is
      officially described in section 7.1.3, "Handling Self- and
      Cross-Modifying Code", of Intel's Software Developer's Manual,
      Volume 3A.
      
      Since stop_machine() can't protect code against NMI/MCE, this
      function cannot modify those handlers. Also, this function is
      basically for modifying a single multibyte instruction; modifying
      multiple instructions at once requires special trap & detour code.
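
      The shape of the mechanism as a reduced kernel-side sketch
      (signatures abbreviated, error handling and cache/pipeline
      synchronization elided):

        #include <linux/stop_machine.h>
        #include <linux/string.h>

        struct text_poke_param {
                void *addr;
                const void *opcode;
                size_t len;
        };

        static int stop_machine_text_poke(void *data)
        {
                struct text_poke_param *p = data;

                /* All other CPUs are spinning inside stop_machine(),
                 * so none can be executing the bytes being rewritten. */
                memcpy(p->addr, p->opcode, p->len);
                return 0;
        }

        void *text_poke_smp(void *addr, const void *opcode, size_t len)
        {
                struct text_poke_param p = { addr, opcode, len };

                stop_machine(stop_machine_text_poke, &p, NULL);
                return addr;
        }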
      
      This code originally comes from the stop_machine() version of the
      immediate-values work. Thanks Jason and Mathieu!
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100225133438.6725.80273.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3d55cc8a
  26. 04 Feb 2010 (1 commit)
    • ftrace/alternatives: Introducing *_text_reserved functions · 2cfa1978
      Committed by Masami Hiramatsu
      Introduce *_text_reserved functions for checking whether a text
      address range is partially reserved. This patch provides checking
      routines for x86 SMP alternatives and dynamic ftrace. Since both
      subsystems modify fixed pieces of kernel text, they should
      reserve and protect those from other dynamic text modifiers,
      like kprobes.
      
      This will also be extended when introducing other subsystems
      which modify fixed pieces of kernel text. Dynamic text modifiers
      should avoid those.
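
      What such a check amounts to, as a compilable sketch (the region
      table is illustrative; the real functions consult the SMP
      alternatives and ftrace records):

        #include <stdbool.h>

        struct text_range { const void *start, *end; };

        static struct text_range reserved[16];
        static unsigned nr_reserved;

        /* True if [start, end] overlaps a range slated for patching. */
        static bool text_reserved(const void *start, const void *end)
        {
                for (unsigned i = 0; i < nr_reserved; i++)
                        if (start <= reserved[i].end &&
                            end >= reserved[i].start)
                                return true;
                return false;
        }

        int main(void)
        {
                char buf[8];

                reserved[nr_reserved++] =
                        (struct text_range){ buf, buf + 7 };
                return text_reserved(buf + 2, buf + 4) ? 0 : 1;
        }
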
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2cfa1978
  27. 30 Dec 2009 (1 commit)
    • x86-64: Modify copy_user_generic() alternatives mechanism · 1b1d9258
      Committed by Jan Beulich
      In order to avoid unnecessary chains of branches, rather than
      implementing copy_user_generic() as a function consisting of
      just a single (possibly patched) branch, instead properly deal
      with patching call instructions in the alternative instructions
      framework, and move the patching into the callers.
      
      As a follow-on, one could also introduce something like
      __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB8180200007800026AE7@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b1d9258
  28. 02 Dec 2009 (1 commit)
    • x86/alternatives: Check replacementlen <= instrlen at build time · 01be50a3
      Committed by Jan Beulich
      Having run into the run-(boot-)time check a couple of times lately,
      I finally took time to find a build-time check so that one doesn't
      need to analyze the register/stack dump and resolve this (through
      manual lookup in vmlinux) to the offending construct.
      
      The assembler will emit a message like "Error: value of <num> too
      large for field of 1 bytes at <offset>", which while not pointing
      out the source location still makes analysis quite a bit easier.
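
      An illustrative reduction of the trick (GCC/Clang with GNU as on
      x86; the kernel's actual macro differs in detail): a one-byte
      datum whose value overflows exactly when the replacement is
      longer than the original, so the assembler rejects the build
      with the message quoted above.

        void length_check_demo(void)
        {
                asm ("661: mov %%eax, %%ebx\n"  /* "original": 2 bytes */
                     "662:\n"
                     "663: xchg %%ax, %%ax\n"   /* "replacement": 2 bytes */
                     "664:\n"
                     ".pushsection .rodata\n"
                     /* fits in a byte only while repl <= orig: */
                     ".byte 0xff + (664b-663b) - (662b-661b)\n"
                     ".popsection"
                     ::: "ebx");
                /* Grow the replacement past the original and gas fails
                 * the build with the "value ... too large" error. */
        }
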
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      LKML-Reference: <4B0FF8AA0200007800022703@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      01be50a3
  29. 22 Aug 2009 (1 commit)
  30. 28 Apr 2009 (1 commit)
    • x86: clean up alternative.h · edc953fa
      Committed by Mathieu Desnoyers
      The alternative header duplicates assembly that could be merged
      into one single macro. Merging this also allows ALTERNATIVE()
      statements to be declared directly within assembly code.

      Use a __stringify() of the feature bits rather than passing an
      "i" operand. Leave the old %0 operand as-is (set to 0), unused,
      to stay compatible with the API.
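
      The __stringify() idiom referred to is the standard two-level
      macro trick (the kernel's copy lives in include/linux/stringify.h;
      the feature name below is illustrative):

        #define __stringify_1(x...) #x
        #define __stringify(x...)   __stringify_1(x)

        #define X86_FEATURE_DEMO 42

        /* Expands to the literal ".byte 42" inside the asm template,
         * so no "i" input operand is needed: */
        const char *demo_template = ".byte " __stringify(X86_FEATURE_DEMO);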
      
      (v2: tab alignment fixes)
      
      [ Impact: cleanup ]
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      LKML-Reference: <20090428151346.GA31212@Krystal>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      edc953fa
  31. 23 Oct 2008 (2 commits)
  32. 23 Jul 2008 (1 commit)
    • x86: consolidate header guards · 77ef50a5
      Committed by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers are turned into single
         underscores.
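
      Under these rules, include/asm-x86/mm_types.h, for example, would
      get a guard of roughly this shape:

        #ifndef ASM_X86__MM_TYPES_H     /* no leading underscore */
        #define ASM_X86__MM_TYPES_H     /* path joined by "__" */

        /* ... declarations ... */

        #endif /* ASM_X86__MM_TYPES_H */
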
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
  33. 24 May 2008 (1 commit)
  34. 17 Apr 2008 (2 commits)
    • 2ac1ea7c
    • x86: enhance DEBUG_RODATA support - alternatives · e587cadd
      Committed by Mathieu Desnoyers
      Fix a memcpy that should be a text_poke (in apply_alternatives).
      
      Use kernel_wp_save/kernel_wp_restore in text_poke to support DEBUG_RODATA
      correctly and so the CPU HOTPLUG special case can be removed.
      
      Add text_poke_early, for alternatives and paravirt boot-time and module load
      time patching.
      
      Changelog:
      
      - Fix text_set and text_poke alignment check (the bitwise AND and
        OR were mixed up)
      - Remove text_set
      - Export add_nops, so it can be used by others.
      - Document text_poke_early.
      - Remove clflush, since it breaks some VIA architectures and is not strictly
        necessary.
      - Add kerneldoc to text_poke and text_poke_early.
      - Create a second vmap instead of using the WP bit to support Xen and VMI.
      - Move the local_irq disabling inside text_poke and
        text_poke_early so that these functions can remain sleepable.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Andi Kleen <andi@firstfloor.org>
      CC: pageexec@freemail.hu
      CC: H. Peter Anvin <hpa@zytor.com>
      CC: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e587cadd
  35. 30 Jan 2008 (1 commit)
  36. 11 Oct 2007 (1 commit)