1. 21 Sep 2009, 1 commit
  2. 19 Sep 2009, 4 commits
  3. 25 Aug 2009, 1 commit
    • x86: Fix build with older binutils and consolidate linker script · c62e4320
      Jan Beulich authored
      binutils prior to 2.17 can't deal with the currently possible
      situation of a new segment following the per-CPU segment while
      that new segment is empty: objcopy misplaces the .bss (and
      perhaps also the .brk) sections outside of any segment.
      
      However, the current ordering of sections really just appears
      to be the effect of cumulative unrelated changes; re-ordering
      things makes it easy to guarantee that the segment following
      the per-CPU one is non-empty, and at the same time eliminates
      the need for the bogus data.init2 segment.
      
      While touching this code, also use the various data-section
      helper macros from include/asm-generic/vmlinux.lds.h.
      
      -v2: fix !SMP builds.
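
      As a rough illustration of the helper-macro usage mentioned above (a
      sketch, not the actual x86 script; the macro names come from
      include/asm-generic/vmlinux.lds.h):

        .data : AT(ADDR(.data) - LOAD_OFFSET) {   /* writable data */
                DATA_DATA                         /* *(.data) and friends */
                CONSTRUCTORS
        } :data
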
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: <sam@ravnborg.org>
      LKML-Reference: <4A94085D02000078000119A5@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 04 Aug 2009, 1 commit
  5. 18 Jul 2009, 1 commit
  6. 09 Jul 2009, 1 commit
    • linker script: unify usage of discard definition · 023bf6f1
      Tejun Heo authored
      Discarded sections in different archs share some commonality but have
      considerable differences.  This led to the linker script for each arch
      implementing its own /DISCARD/ definition, which makes maintenance
      tedious and adding new entries error-prone.
      
      This patch makes all linker scripts move their discard definitions to
      the end of the linker script and use the common DISCARDS macro.  As ld
      uses the first matching section definition, archs can keep sections
      that would otherwise be discarded by default by including them earlier
      in the linker script.
      
      ia64 is notable because it first throws away some ia64-specific
      subsections and then includes the rest of the sections in the final
      image, so those sections must be discarded before the inclusion.
      
      defconfig compile tested for x86, x86-64, powerpc, powerpc64, ia64,
      alpha, sparc, sparc64 and s390.  Michal Simek tested microblaze.
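
      A sketch of the resulting pattern (simplified; the real scripts differ
      per arch):

        SECTIONS
        {
                /* ... all the kept output sections come first; since ld uses
                   the first matching rule, anything named here never reaches
                   the discard rule below ... */

                DISCARDS        /* the shared /DISCARD/ definition, placed last */
        }
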
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Tested-by: Michal Simek <monstr@monstr.eu>
      Cc: linux-arch@vger.kernel.org
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: microblaze-uclinux@itee.uq.edu.au
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tony Luck <tony.luck@intel.com>
  7. 12 Jun 2009, 1 commit
  8. 29 Apr 2009, 14 commits
    • x86, vmlinux.lds: fix relocatable symbols · fd073194
      Ingo Molnar authored
      The __init_begin/_end symbols should be inside sections as well;
      otherwise the relocatable kernel gets confused and frees the
      init sections from the wrong place.
      
      [ Impact: fix bootup crash ]
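
      Roughly what "inside a section" means here (a sketch along the lines of
      the fix):

        . = ALIGN(PAGE_SIZE);
        .init.begin : AT(ADDR(.init.begin) - LOAD_OFFSET) {
                __init_begin = .;       /* paired with __init_end */
        }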
      
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20090429105056.GA28720@uranus.ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: add copyright · 91fd7fe8
      Ingo Molnar authored
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-2-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify remaining parts · 091e52c3
      Sam Ravnborg authored
      32 bit:
      - explicitly page-align .bss
      - move ALIGN() out of the .brk output section
      - discard *(.eh_frame)
      
      64 bit:
      - move ALIGN() out of .bss output section
      - move ALIGN() out of .brk output section
      - use a dedicated section to define _end
      
      [ Impact: unify and fix section alignments in linker script ]
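
      For instance, "move ALIGN() out of the output section" amounts to a
      change of this shape (sketch only):

        . = ALIGN(PAGE_SIZE);           /* was an ALIGN() inside the section */
        .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
                __bss_start = .;
                *(.bss.page_aligned)
                *(.bss)
                __bss_stop = .;
        }
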
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-13-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify percpu · 9d16e783
      Sam Ravnborg authored
      32 bit:
      - move __init_end outside the .bss output section
        It really did not belong in there
      
      [ Impact: 64-bit: cleanup, 32-bit: refactor linker script ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-12-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify .exit.* and .init.ramfs · bf6a5741
      Sam Ravnborg authored
      [ Impact: cleanup ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-11-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify parainstructions · ae618362
      Sam Ravnborg authored
      32 bit:
      
       - increase alignment from 4 to 8 for .parainstructions
       - increase alignment from 4 to 8 for .altinstructions
      
      64 bit:
      
       - move ALIGN() outside output section for .altinstructions
      
      None of the above should result in any functional change.
      
      [ Impact: refactor and unify linker script ]
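
      A sketch of the unified shape (alignment raised to 8 and performed
      ahead of the output section):

        . = ALIGN(8);
        .parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
                __parainstructions = .;
                *(.parainstructions)
                __parainstructions_end = .;
        }
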
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-10-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify first part of initdata · e58bdaa8
      Sam Ravnborg authored
      32-bit:
      
       - Move the definition of __init_begin outside the output section,
         because it covers more than one section.
       - Move the ALIGN() for the end of section inside the .smp_locks output
         section. Same effect, but it better documents the intent that both
         the start and the end need to be aligned.
      
      64-bit:
      
       - Move ALIGN() outside the output section in .init.setup
       - Delete the unused __smp_alt_* symbols
      
      None of the above should result in any functional change.
      
      [ Impact: refactor and unify linker script ]
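
      The .smp_locks part described above ends up looking roughly like this
      (sketch; exact alignment values may differ):

        . = ALIGN(PAGE_SIZE);
        .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
                __smp_locks = .;
                *(.smp_locks)
                __smp_locks_end = .;
                . = ALIGN(PAGE_SIZE);   /* end-of-section alignment kept inside
                                           the section to document the intent */
        }
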
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-9-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: move vsyscall output sections · ff6f87e1
      Sam Ravnborg authored
      [ Impact: cleanup ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-8-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify data output sections · 1f6397ba
      Sam Ravnborg authored
      For 64 bit the following functional changes are introduced:
      
       - .data.page_aligned has moved
       - .data.cacheline_aligned has moved
       - .data.read_mostly has moved
       - ALIGN() moved out of output section for .data.cacheline_aligned
       - ALIGN() moved out of output section for .data.page_aligned
      
      Notice that 32 bit and 64 bit have different locations for _edata.
      .data_nosave is 32-bit only, as 64 bit is special due to PERCPU.
      
      [ Impact: 32-bit: cleanup, 64-bit: use 32-bit linker script ]
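
      An illustration of the pattern (a sketch, not the verbatim script; the
      real one derives the cacheline size from the kernel configuration):

        . = ALIGN(PAGE_SIZE);
        .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
                *(.data.page_aligned)
        }

        . = ALIGN(64);                  /* cacheline size, config-dependent */
        .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
                *(.data.read_mostly)
        }
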
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-7-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify exception table · 448bc3ab
      Sam Ravnborg authored
      [ Impact: cleanup ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-6-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify .text output sections · dfc20895
      Sam Ravnborg authored
      32 bit x86 had a dedicated .text.head output section,
      whereas 64 bit had it all in a single output section.
      
      In the unified version the dedicated .text.head output section
      was kept to have full control over the head code.
      
      32 bit:
      
      - Moved definition of _stext to the linker script.
        The definition is located _after_ .text.page_aligned as this
        is what 32 bit did before.
      
      The ALIGN(8) was introduced so we hit the exact same address
      (on the tested config) before and after the move.
      
      I assume that it is a bug that _stext did not cover the
      .text.page_aligned section - if this is true it can be fixed
      in a follow-up patch (and the ugly ALIGN() can be dropped).
      
      [ Impact: 64-bit: cleanup, 32-bit: use the 64-bit linker script ]
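
      An illustration of the layout described above (a sketch, not the
      verbatim unified script):

        .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
                _text = .;
                *(.text.head)           /* head code stays first in the image */
        } :text = 0x9090

        .text : AT(ADDR(.text) - LOAD_OFFSET) {
                . = ALIGN(PAGE_SIZE);
                *(.text.page_aligned)
                . = ALIGN(8);
                _stext = .;             /* after .text.page_aligned, as on 32 bit */
                TEXT_TEXT
                SCHED_TEXT
                LOCK_TEXT
        } :text = 0x9090
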
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-5-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify start/end of SECTIONS · 444e0ae4
      Sam Ravnborg authored
      [ Impact: cleanup ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-4-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify PHDRS · afb8095a
      Sam Ravnborg authored
      The PHDRS are not identical for 32 and 64 bit, so use
      ifdefs to cover the differences.
      
      On the assumption that they may eventually become identical,
      the ifdef is placed inside the PHDRS definition.
      
      [ Impact: cleanup ]
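
      Roughly the resulting structure (sketch; program headers and flags
      abbreviated):

        PHDRS {
                text PT_LOAD FLAGS(5);          /* R_E */
                data PT_LOAD FLAGS(7);          /* RWE */
        #ifdef CONFIG_X86_64
                user PT_LOAD FLAGS(7);          /* 64-bit-only headers */
                percpu PT_LOAD FLAGS(7);
        #endif
                note PT_NOTE FLAGS(0);          /* ___ */
        }
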
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-3-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, vmlinux.lds: unify header/footer · 17ce265d
      Sam Ravnborg authored
      Merge everything except PHDRS and SECTIONS into
      vmlinux.lds.S.
      
      [ Impact: cleanup ]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Tim Abbott <tabbott@MIT.EDU>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1240991249-27117-2-git-send-email-sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. 11 Oct 2007, 2 commits
  10. 20 Jul 2007, 2 commits
    • i386: Put allocated ELF notes in read-only data segment · cbe87121
      Roland McGrath authored
      This changes the i386 linker script and the asm-generic macro it uses so that
      ELF note sections with SHF_ALLOC set are linked into the kernel image along
      with the other read-only data.  A PT_NOTE program header also points to their
      location.
      
      This paves the way for putting useful build-time information into ELF notes
      that can be found easily later in a kernel memory dump.
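
      The asm-generic side boils down to a macro of roughly this shape, which
      the arch script then places together with the read-only data (sketch):

        #define NOTES                                           \
                .notes : AT(ADDR(.notes) - LOAD_OFFSET) {       \
                        __start_notes = .;                      \
                        *(.note.*)                              \
                        __stop_notes = .;                       \
                }

        /* in the i386 linker script */
        NOTES :text :note
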
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • define new percpu interface for shared data · 5fb7dc37
      Fenghua Yu authored
      The per-cpu data section contains two types of data: one set that is
      accessed exclusively by the local cpu, and another that is per cpu but
      also shared with remote cpus.  In the current kernel, these two sets
      are not clearly separated.  This can cause the same cacheline to be
      shared between the two sets of data, which results in unnecessary
      bouncing of the cacheline between cpus.
      
      One way to fix the problem is to cacheline-align the remotely accessed
      per-cpu data, both at the beginning and at the end.  Because of the
      padding at both ends, this will likely cause some memory wastage, and
      the interface to achieve this is not clean.
      
      This patch:
      
      Moves the remotely accessed per-cpu data (which is currently marked
      as ____cacheline_aligned_in_smp) into a different section, where all
      the data elements are cacheline aligned.  As such, it cleanly separates
      the local-only data from the remotely accessed data.
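
      On the linker-script side this amounts to giving the shared data its
      own input section (a sketch; the C side marks such variables with a
      dedicated DEFINE_PER_CPU_SHARED_ALIGNED-style macro):

        .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
                __per_cpu_start = .;
                *(.data.percpu)
                *(.data.percpu.shared_aligned)  /* remotely accessed, cacheline
                                                   aligned elements */
                __per_cpu_end = .;
        }
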
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 18 Jul 2007, 1 commit
    • xen: Core Xen implementation · 5ead97c8
      Jeremy Fitzhardinge authored
      This patch is a rollup of all the core pieces of the Xen
      implementation, including:
       - booting and setup
       - pagetable setup
       - privileged instructions
       - segmentation
       - interrupt flags
       - upcalls
       - multicall batching
      
      BOOTING AND SETUP
      
      The vmlinux image is decorated with ELF notes which tell the Xen
      domain builder what the kernel's requirements are; the domain builder
      then constructs the address space accordingly and starts the kernel.
      
      Xen has its own entrypoint for the kernel (contained in an ELF note).
      The ELF notes are set up by xen-head.S, which is included into head.S.
      In principle it could be linked separately, but it seems to provoke
      lots of binutils bugs.
      
      Because the domain builder starts the kernel in a fairly sane state
      (32-bit protected mode, paging enabled, flat segments set up), there's
      not a lot of setup needed before starting the kernel proper.  The main
      steps are:
        1. Install the Xen paravirt_ops, which is simply a matter of a
           structure assignment.
        2. Set init_mm to use the Xen-supplied pagetables (analogous to the
           head.S generated pagetables in a native boot).
        3. Reserve address space for Xen, since it takes a chunk at the top
           of the address space for its own use.
        4. Call start_kernel()
      
      PAGETABLE SETUP
      
      Once we hit the main kernel boot sequence, it will end up calling back
      via paravirt_ops to set up various pieces of Xen specific state.  One
      of the critical things which requires a bit of extra care is the
      construction of the initial init_mm pagetable.  Because Xen places
      tight constraints on pagetables (an active pagetable must always be
      valid, and must always be mapped read-only to the guest domain), we
      need to be careful when constructing the new pagetable to keep these
      constraints in mind.  It turns out that the easiest way to do this is
      to use the initial Xen-provided pagetable as a template, and then just
      insert new mappings for memory where a mapping doesn't already exist.
      
      This means that during pagetable setup, it uses a special version of
      xen_set_pte which ignores any attempt to remap a read-only page as
      read-write (since Xen will map its own initial pagetable as RO), but
      lets other changes to the ptes happen, so that things like NX are set
      properly.
      
      PRIVILEGED INSTRUCTIONS AND SEGMENTATION
      
      When the kernel runs under Xen, it runs in ring 1 rather than ring 0.
      This means that it is more privileged than user-mode in ring 3, but it
      still can't run privileged instructions directly.  Non-performance-critical
      instructions are dealt with by taking a privilege exception, trapping
      into the hypervisor and emulating the instruction there, but more
      performance-critical instructions have their own specific
      paravirt_ops.  In many cases we can avoid having to do any hypercalls
      for these instructions, or the Xen implementation is quite different
      from the normal native version.
      
      The privileged instructions fall into the broad classes of:
        Segmentation: setting up the GDT and the GDT entries, LDT,
           TLS and so on.  Xen doesn't allow the GDT to be directly
           modified; all GDT updates are done via hypercalls where the new
           entries can be validated.  This is important because Xen uses
           segment limits to prevent the guest kernel from damaging the
           hypervisor itself.
        Traps and exceptions: Xen uses a special format for trap entrypoints,
           so when the kernel wants to set an IDT entry, it needs to be
           converted to the form Xen expects.  Xen sets int 0x80 up specially
           so that the trap goes straight from userspace into the guest kernel
           without going via the hypervisor.  sysenter isn't supported.
        Kernel stack: The esp0 entry is extracted from the tss and provided to
           Xen.
        TLB operations: the various TLB calls are mapped into corresponding
           Xen hypercalls.
        Control registers: all the control registers are privileged.  The most
           important is cr3, which points to the base of the current pagetable,
           and we handle it specially.
      
      Another instruction we treat specially is CPUID, even though it's not
      privileged.  We want to control what CPU features are visible to the
      rest of the kernel, and so CPUID ends up going into a paravirt_op.
      Xen implements this mainly to disable the ACPI and APIC subsystems.
      
      INTERRUPT FLAGS
      
      Xen maintains its own separate flag for masking events, which is
      contained within the per-cpu vcpu_info structure.  Because the guest
      kernel runs in ring 1 and not 0, the IF flag in EFLAGS is completely
      ignored (and must be, because even if a guest domain disables
      interrupts for itself, it can't disable them overall).
      
      (A note on terminology: "events" and interrupts are effectively
      synonymous.  However, rather than using an "enable flag", Xen uses a
      "mask flag", which blocks event delivery when it is non-zero.)
      
      There are paravirt_ops for each of cli/sti/save_fl/restore_fl, which
      are implemented to manage the Xen event mask state.  The only thing
      worth noting is that when events are unmasked, we need to explicitly
      see if there's a pending event and call into the hypervisor to make
      sure it gets delivered.
      
      UPCALLS
      
      Xen needs a couple of upcall (or callback) functions to be implemented
      by each guest.  One is the event upcall, which is how events
      (interrupts, effectively) are delivered to the guests.  The other is
      the failsafe callback, which is used to report errors caused either by
      reloading a segment register or by iret.  These are
      implemented in i386/kernel/entry.S so they can jump into the normal
      iret_exc path when necessary.
      
      MULTICALL BATCHING
      
      Xen provides a multicall mechanism, which allows multiple hypercalls
      to be issued at once in order to mitigate the cost of trapping into
      the hypervisor.  This is particularly useful for context switches,
      since the 4-5 hypercalls they would normally need (reload cr3, update
      TLS, maybe update LDT) can be reduced to one.  This patch implements a
      generic batching mechanism for hypercalls, which gets used in many
      places in the Xen code.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Cc: Ian Pratt <ian.pratt@xensource.com>
      Cc: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
      Cc: Adrian Bunk <bunk@stusta.de>
  12. 19 May 2007, 2 commits
  13. 11 May 2007, 1 commit
    • Revert "[PATCH] paravirt: Add startup infrastructure for paravirtualization" · 5a18c92a
      Eric W. Biederman authored
      This reverts commit c9ccf30d.
      
      Entering the kernel at startup_32 without passing our real mode data in
      %esi, and without guaranteeing that physical and virtual addresses are
      identity mapped makes head.S impossible to maintain.
      
      The only user of this infrastructure is lguest, which is not merged, so
      nothing we currently support will break by removing this over-designed
      nightmare; only the pending lguest patches will be affected.  The
      pending Xen patches use a different entry point.
      
      We are currently discussing what Xen and lguest need to do to boot the
      kernel in a more normal fashion, so using startup_32 in this weird
      manner is clearly not their long-term direction.
      
      So let's remove this code in head.S before it causes brain damage to
      the people trying to maintain head.S.
      
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Zachary Amsden <zach@vmware.com>
      CC: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 03 May 2007, 6 commits
  15. 16 Apr 2007, 1 commit
    • [PATCH] x86: Fix gcc 4.2 _proxy_pda workaround · 08269c6d
      Andi Kleen authored
      Due to an over-aggressive optimizer, gcc 4.2 cannot optimize away _proxy_pda
      in all cases (counter-intuitive, but true).  This breaks loading of some
      modules.
      
      The earlier workaround of just exporting a dummy symbol unfortunately did
      not work, because the module code ignores exports with the value 0.
      
      Make it 1 instead.
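
      The linker-script side of the workaround is essentially a one-line
      change (sketch):

        _proxy_pda = 1;         /* was 0; the module code ignores exports
                                   whose value is 0 */
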
      Signed-off-by: Andi Kleen <ak@suse.de>
  16. 13 Feb 2007, 1 commit
    • [PATCH] i386: move startup_32() in text.head section · f8657e1b
      Vivek Goyal authored
      o The startup_32 entry point was in the .text section, but it also
        accesses some init data, which prompts MODPOST to generate compilation
        warnings.
      
      WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from
      .text between '_text' (at offset 0xc0100029) and 'startup_32_smp'
      WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from
      .text between '_text' (at offset 0xc0100037) and 'startup_32_smp'
      WARNING: vmlinux - Section mismatch: reference to
      .init.data:init_pg_tables_end from .text between '_text' (at offset
      0xc0100099) and 'startup_32_smp'
      
      o startup_32 can't be moved to .init.text, as this entry point has to be
        at the start of the bzImage. Hence startup_32 is moved to a new section,
        .text.head, and MODPOST is instructed not to generate warnings when init
        data is accessed from the .text.head section. This code has been audited.
      
      o The SMP boot-up code (startup_32_smp) can go into .init.text if CPU
        hotplug is not supported. Otherwise it generates more warnings:
      
      WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from
      .text between 'checkCPUtype' (at offset 0xc0100126) and 'is486'
      WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from
      .text between 'checkCPUtype' (at offset 0xc0100130) and 'is486'
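
      The linker-script side of the move looks roughly like this (sketch):

        .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
                _text = .;              /* start of kernel text */
                *(.text.head)           /* startup_32 lives here now */
        } :text = 0x9090
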
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Andi Kleen <ak@suse.de>