  1. 06 Aug 2008, 1 commit
  2. 13 Aug 2008, 2 commits
  3. 06 Aug 2008, 6 commits
  4. 05 Aug 2008, 3 commits
  5. 13 Aug 2008, 1 commit
  6. 05 Aug 2008, 4 commits
  7. 13 Aug 2008, 3 commits
    • h8300: fix section mismatches · 9de15e91
      Committed by Yoshinori Sato
      WARNING: vmlinux.o(.text+0x2fdf): Section mismatch in reference from the variable .LM3 to the variable .init.text:___alloc_bootmem
      The function .LM3() references
      the variable __init ___alloc_bootmem.
      This is often because .LM3 lacks a __init
      annotation or the annotation of ___alloc_bootmem is wrong.
      
      WARNING: vmlinux.o(.text+0x2ff5): Section mismatch in reference from the variable .LM4 to the variable .init.text:___alloc_bootmem
      The function .LM4() references
      the variable __init ___alloc_bootmem.
      This is often because .LM4 lacks a __init
      annotation or the annotation of ___alloc_bootmem is wrong.
      
      WARNING: vmlinux.o(.text+0x300b): Section mismatch in reference from the variable .LM5 to the variable .init.text:___alloc_bootmem
      The function .LM5() references
      the variable __init ___alloc_bootmem.
      This is often because .LM5 lacks a __init
      annotation or the annotation of ___alloc_bootmem is wrong.
      
      WARNING: vmlinux.o(.text+0x304b): Section mismatch in reference from the variable .LM10 to the variable .init.text:_free_area_init
      The function .LM10() references
      the variable __init _free_area_init.
      This is often because .LM10 lacks a __init
      annotation or the annotation of _free_area_init is wrong.
      
      WARNING: vmlinux.o(.text+0x30a3): Section mismatch in reference from the variable .LM17 to the variable .init.text:_free_all_bootmem
      The function .LM17() references
      the variable __init _free_all_bootmem.
      This is often because .LM17 lacks a __init
      annotation or the annotation of _free_all_bootmem is wrong.
      Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
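      These warnings all follow the same pattern: code kept in .text calls helpers that, as the
      warnings show, live in .init.text and are discarded after boot. A minimal sketch of the kind
      of fix they call for, using the generic bootmem API rather than the actual h8300 code (the
      function name below is made up for illustration):

      #include <linux/init.h>
      #include <linux/bootmem.h>

      /* Marking the caller __init (the fix) moves it into .init.text as well,
       * so the reference to the boot-time-only allocator becomes legitimate
       * and the modpost section-mismatch warning goes away. */
      void __init setup_boot_tables(void)
      {
              /* The bootmem allocator lives in .init.text and only exists
               * during boot, so only __init code may call it. */
              void *tbl = alloc_bootmem(PAGE_SIZE);

              /* ... fill in boot-time-only tables ... */
              (void)tbl;
      }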
    • [IA64] use bcd2bin/bin2bcd · 430ac5ba
      Committed by Adrian Bunk
      This patch changes ia64 to use the new bcd2bin/bin2bcd functions instead
      of the obsolete BCD2BIN/BIN2BCD macros.
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
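      The new helpers are declared in <linux/bcd.h>; conceptually they do no more than the
      following standalone sketch of the conversion (illustration only, not the kernel source):

      #include <stdio.h>

      /* Standalone equivalents of the kernel's bcd2bin()/bin2bcd() helpers. */
      static unsigned int bcd2bin(unsigned char val)
      {
              return (val & 0x0f) + (val >> 4) * 10;
      }

      static unsigned char bin2bcd(unsigned int val)
      {
              return ((val / 10) << 4) | (val % 10);
      }

      int main(void)
      {
              /* An RTC typically stores "59 seconds" as BCD 0x59. */
              printf("bcd2bin(0x59) = %u\n", bcd2bin(0x59));   /* 59 */
              printf("bin2bcd(59)   = 0x%02x\n", bin2bcd(59)); /* 0x59 */
              return 0;
      }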
    • [IA64] Ensure cpu0 can access per-cpu variables in early boot code · 10617bbe
      Committed by Tony Luck
      ia64 handles per-cpu variables a little differently from other architectures
      in that it maps the physical memory allocated for each cpu at a constant
      virtual address (0xffffffffffff0000). This mapping is not enabled until
      the architecture specific cpu_init() function is run, which causes problems
      since some generic code is run before this point. In particular when
      CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to
      per-cpu memory at the first printk() call so the boot will fail without
      the kernel printing anything to the console.
      
      Fix this by allocating percpu memory for cpu0 in the kernel data section
      and doing all initialization to enable percpu access in head.S before
      calling any generic code.
      
      Other cpus must take care not to access per-cpu variables too early, but
      their code path from start_secondary() to cpu_init() is all in arch/ia64.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
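      For context, a generic per-cpu access looks roughly like this (the variable and function
      names are made up for illustration). On ia64 the access resolves through the fixed per-cpu
      mapping described above, which is why it traps if generic code runs before head.S or
      cpu_init() has set that mapping up:

      #include <linux/percpu.h>

      /* Illustrative per-cpu variable; generic code touches things like this
       * (e.g. printk timestamping) well before the arch-specific cpu_init(). */
      DEFINE_PER_CPU(unsigned long, demo_counter);

      void touch_demo_counter(void)
      {
              /* On ia64 this load/store goes through the per-cpu mapping at a
               * constant virtual address; before that mapping exists for cpu0,
               * the access faults and the boot dies silently. */
              __get_cpu_var(demo_counter)++;
      }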
  8. 12 Aug 2008, 8 commits
  9. 11 Aug 2008, 5 commits
    • x86: Restore proper vector locking during cpu hotplug · d388e5fd
      Committed by Eric W. Biederman
      Having cpu_online_map change during assign_irq_vector can result
      in some really nasty and weird things happening.  The one that
      bit me last time was accessing non-existent per-cpu memory for
      non-existent cpus.
      
      This locking was removed in a sloppy x86_64 and x86_32 merge patch.
      
      Guys, can we please try to avoid subtly breaking x86 when we are
      merging files together?
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
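      The pattern the fix restores looks roughly like this; the names below are simplified
      stand-ins, not the exact x86 code. The key point is that vector assignment and the per-cpu
      setup done as a CPU comes online must serialize on the same lock, so cpu_online_map cannot
      change underneath the vector allocator:

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(vector_lock);

      void assign_vector_locked(unsigned int irq)
      {
              unsigned long flags;

              spin_lock_irqsave(&vector_lock, flags);
              /* ... scan the online cpus and pick a free vector for irq ... */
              spin_unlock_irqrestore(&vector_lock, flags);
      }

      void cpu_coming_online(void)
      {
              unsigned long flags;

              spin_lock_irqsave(&vector_lock, flags);
              /* ... install this cpu's copy of the per-cpu vector table ... */
              spin_unlock_irqrestore(&vector_lock, flags);
      }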
    • powerpc: Do not ignore arch/powerpc/include · 0afd2ac9
      Committed by Junio C Hamano
      Back when the .gitignore file was added to arch/powerpc/ in 06f2138e ([POWERPC]
      Add files build to .gitignore, 2006-11-26), there indeed was nothing
      tracked in the ignored hierarchy and ignoring everything made sense.  But
      we have very many tracked files there these days, and having a higher-level
      .gitignore that ignores everything is asking for future trouble.
      
      This should have been part of b8b572e1 (powerpc: Move include files to
      arch/powerpc/include/asm, 2008-08-01).
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc/mm: Fix attribute confusion with htab_bolt_mapping() · bc033b63
      Committed by Benjamin Herrenschmidt
      The function htab_bolt_mapping() is used to create permanent
      mappings in the MMU hash table, for example, in order to create
      the linear mapping of vmemmap.  It's also used by early boot
      ioremap (before mem_init_done).
      
      However, the way ioremap uses it is incorrect as it passes it the
      protection flags in the "linux PTE" form while htab_bolt_mapping()
      expects them in the hash table format.  This is made more confusing by
      the fact that some of those flags are actually in the same position in
      both cases.
      
      This fixes it all by making htab_bolt_mapping() take normal linux
      protection flags instead, and use a little helper to convert them to
      htab flags. Callers can now use the usual PAGE_* definitions safely.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/include/asm/mmu-hash64.h |    2 -
       arch/powerpc/mm/hash_utils_64.c       |   65 ++++++++++++++++++++--------------
       arch/powerpc/mm/init_64.c             |    9 +---
       3 files changed, 44 insertions(+), 32 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
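      A simplified sketch of such a conversion helper, assuming the usual powerpc hash-MMU macro
      names (_PAGE_* for Linux PTE bits, HPTE_R_* for hash-PTE attribute bits); this is not the
      commit's exact helper, which also has to derive the protection (PP) and no-execute bits:

      #include <asm/pgtable.h>
      #include <asm/mmu-hash64.h>

      /* Translate Linux PTE attribute bits into hash-PTE attribute bits. */
      unsigned long linux_to_htab_flags(unsigned long pteflags)
      {
              unsigned long rflags = 0;

              if (pteflags & _PAGE_NO_CACHE)  /* cache-inhibited */
                      rflags |= HPTE_R_I;
              if (pteflags & _PAGE_GUARDED)   /* guarded: no speculative access */
                      rflags |= HPTE_R_G;
              if (pteflags & _PAGE_WRITETHRU) /* write-through caching */
                      rflags |= HPTE_R_W;
              if (pteflags & _PAGE_COHERENT)  /* enforce memory coherence */
                      rflags |= HPTE_R_M;

              return rflags;
      }

      With a helper like this doing the translation internally, callers of htab_bolt_mapping()
      can pass the usual Linux PAGE_* protections directly, as the commit message notes.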
    • powerpc/pci: Don't keep ISA memory hole resources in the tree · 8db13a0e
      Committed by Benjamin Herrenschmidt
      When we have an ISA memory hole (i.e., a PCI window that allows us to
      generate PCI memory cycles at low PCI addresses) mixed with other
      resources using a different CPU <=> PCI mapping, we must not keep
      the ISA hole in the bridge resource list.
      
      If we do, things might start trying to allocate device resources
      in there and get the PCI addresses wrong.
      
      This fixes it by arranging to remove the ISA memory hole resource in
      this case.  This fixes various cases of PCMCIA breakage on PowerBooks
      using the MPC106 "grackle" bridge.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
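      Schematically, the fix amounts to leaving the ISA hole window out of the bridge's resource
      tree so the allocator can never place a device BAR inside it. All names below are invented
      for this sketch; it is not the actual patch:

      #include <linux/kernel.h>
      #include <linux/ioport.h>

      /* Register every bridge memory window except the ISA hole, so the
       * resource allocator never hands out device addresses from a window
       * whose CPU <=> PCI offset differs from the others. */
      void register_bridge_windows(struct resource *parent,
                                   struct resource windows[], int n,
                                   int isa_hole_index)
      {
              int i;

              for (i = 0; i < n; i++) {
                      if (i == isa_hole_index)
                              continue;       /* keep the ISA hole out of the tree */
                      if (request_resource(parent, &windows[i]))
                              printk(KERN_WARNING "window %d conflicts, skipped\n", i);
              }
      }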
    • powerpc: Zero fill the return values of rtas argument buffer · b79998fc
      Committed by Nathan Fontenot
      The kernel copy of the rtas args struct contains the return
      value(s) for the specified rtas call.  These are copied back
      to user space with the assumption that every value has been
      set by the rtas call, which is not always the case.
      Thus userspace can see random values and think the call failed
      when in fact it succeeded but for some reason didn't set one
      of the return values.
      
      This fixes the problem by zeroing out the return value fields
      of the rtas args struct before processing the rtas call.
      Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
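      In outline the fix is just a memset over the return slots before entering firmware. The
      struct below is a simplified stand-in for powerpc's struct rtas_args (see asm/rtas.h), kept
      minimal for illustration:

      #include <linux/string.h>
      #include <linux/types.h>

      /* Simplified stand-in: the first nargs slots of args[] are call inputs,
       * the next nret slots receive the results. */
      struct rtas_args_sketch {
              u32 token;
              u32 nargs;
              u32 nret;
              u32 args[16];
      };

      /* Clear the return slots before entering RTAS, so anything firmware
       * leaves untouched is copied back to userspace as 0 rather than as
       * stale data left over from an earlier call. */
      void prepare_rtas_call(struct rtas_args_sketch *a)
      {
              memset(&a->args[a->nargs], 0, a->nret * sizeof(a->args[0]));
              /* ... enter firmware, then copy args[nargs .. nargs+nret-1] back ... */
      }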
  10. 09 Aug 2008, 6 commits
  11. 08 Aug 2008, 1 commit