1. 15 May 2008, 1 commit
    • [IA64] Don't reserve crashkernel memory > 4 GB · 8a3360f0
      Committed by Bernhard Walle
      Some IA64 machines map all cell-local memory above 4 GB (the 32-bit limit).
      However, in most cases the kernel needs some DMA-capable memory below that
      limit, so on such a machine configuration the crashkernel will be reserved
      above 4 GB.
      
      For machines that use the SWIOTLB implementation because they lack an I/O MMU,
      that low memory is required by SWIOTLB itself. In that case it doesn't make
      sense to reserve the crashkernel at all, because it would be unusable for
      kdump.
      
      A special case is the "hpzx1" machine vector. In theory it has an I/O MMU, so
      it can be booted above 4 GB. However, in the kdump case that is not possible
      because of changeset 51b58e3e:
      
          On HP zx1 machines, the 'machvec=dig' parameter is needed for the kdump
          kernel to avoid problems with the HP sba iommu.  The problem is that during
          the boot of the kdump kernel, the iommu is re-initialized, so in-flight DMA
          from improperly shut-down drivers causes an IOTLB miss which leads to an
          MCA.  With kdump, the idea is to get into the kdump kernel with as little
          code as we can, so shutting down drivers properly is not an option.
      
          The workaround is to add 'machvec=dig' to the kdump kernel boot parameters.
          This makes the kdump kernel avoid using the sba iommu altogether, leaving
          the IOTLB intact.  Any ongoing DMA falls harmlessly outside the kdump
          kernel.  After the kdump kernel reboots, all devices will have been
          shut down properly and DMA stopped.
      
      This patch pushes that functionality into the sba iommu initialization
      code, so that users won't have to find the obscure documentation telling
      them about 'machvec=dig'.
      
      This means that hpzx1, too, cannot boot when all memory is above the 4 GB
      limit, so the only machine vectors that can handle this case are "sn2" and
      "uv". (A sketch of the resulting reservation rule follows this entry.)
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      8a3360f0
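
      The reservation rule above reduces to a simple check. A minimal,
      self-contained C sketch of that decision follows (an editor's illustration,
      not kernel code; crashkernel_usable() and its arguments are invented names):

          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>

          #define FOUR_GB (1ULL << 32)

          /* Only "sn2" and "uv" can use a crashkernel that sits entirely above
           * 4 GB; every other machine vector needs low, DMA-capable memory
           * (SWIOTLB, or the machvec=dig case forced on hpzx1 for kdump). */
          static int crashkernel_usable(const char *machvec, uint64_t crash_base)
          {
              if (crash_base < FOUR_GB)
                  return 1;                       /* low memory: always usable */
              return strcmp(machvec, "sn2") == 0 ||
                     strcmp(machvec, "uv") == 0;  /* real I/O MMU machines */
          }

          int main(void)
          {
              printf("%d\n", crashkernel_usable("dig", 5ULL << 30)); /* 0: skip */
              printf("%d\n", crashkernel_usable("sn2", 5ULL << 30)); /* 1: keep */
              return 0;
          }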
  2. 12 April 2008, 1 commit
    • [IA64] Fix NUMA configuration issue · 98075d24
      Committed by Zoltan Menyhart
      There is a NUMA memory configuration issue in 2.6.24:
      
      A 2-node machine of ours has the following memory layout:
      
      Node 0:	0 - 2 Gbytes
      Node 0:	4 - 8 Gbytes
      Node 1:	8 - 16 Gbytes
      Node 0:	16 - 18 Gbytes
      
      "efi_memmap_init()" merges the three last ranges into one.
      
      "register_active_ranges()" is called as follows:
      
      efi_memmap_walk(register_active_ranges, NULL);
      
      i.e. once for the 4 - 18 Gbytes range. It picks up the node
      number from the start address and registers all of that memory
      for node #0.
      
      "register_active_ranges()" should be called as follows to
      make sure there is no merged address range at its entry:
      
      efi_memmap_walk(filter_memory, register_active_ranges);
      
      "filter_memory()" is similar to "filter_rsvd_memory()",
      but the reserved memory ranges are not filtered out.
      Signed-off-by: Zoltan Menyhart <Zoltan.Menyhart@bull.net>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      98075d24
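
      The editor's toy program below shows why the start address alone is not
      enough: walking the merged 4 - 18 GB range credits everything to node 0,
      while walking the unmerged pieces gives the right split. The layout table
      and node_of() helper are stand-ins for illustration, not kernel code:

          #include <stdio.h>
          #include <stdint.h>

          /* Node layout from the commit message, in GB. */
          struct range { uint64_t start, end; int node; };
          static const struct range layout[] = {
              { 0, 2, 0 }, { 4, 8, 0 }, { 8, 16, 1 }, { 16, 18, 0 },
          };
          #define NRANGES (sizeof(layout) / sizeof(layout[0]))

          static uint64_t node_mem[2];

          /* Which node owns an address (a stand-in for paddr_to_nid()). */
          static int node_of(uint64_t addr)
          {
              for (size_t i = 0; i < NRANGES; i++)
                  if (addr >= layout[i].start && addr < layout[i].end)
                      return layout[i].node;
              return 0;
          }

          /* Models register_active_ranges(): the node comes from the *start*
           * address, so a merged multi-node range is credited to one node. */
          static void register_active_ranges(uint64_t start, uint64_t end)
          {
              node_mem[node_of(start)] += end - start;
          }

          int main(void)
          {
              register_active_ranges(0, 2);           /* merged walk (bug) */
              register_active_ranges(4, 18);
              printf("merged:   node0=%llu GB node1=%llu GB\n",
                     (unsigned long long)node_mem[0], (unsigned long long)node_mem[1]);

              node_mem[0] = node_mem[1] = 0;          /* unmerged walk (fix) */
              for (size_t i = 0; i < NRANGES; i++)
                  register_active_ranges(layout[i].start, layout[i].end);
              printf("unmerged: node0=%llu GB node1=%llu GB\n",
                     (unsigned long long)node_mem[0], (unsigned long long)node_mem[1]);
              return 0;
          }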
  3. 09 April 2008, 1 commit
    • [IA64] Minimize per_cpu reservations. · 2c6e6db4
      Committed by holt@sgi.com
      This patch significantly shrinks boot-time memory allocation on ia64.
      It does this by not allocating per_cpu areas for CPUs that can never
      exist.
      
      In the case where ACPI does not have any NUMA node description of the
      CPUs, I defaulted to assigning the first 32 round-robin on the known
      nodes (a sketch follows this entry). For !CONFIG_ACPI I used
      for_each_possible_cpu().
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      2c6e6db4
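
      A short sketch of the round-robin default mentioned above (editor's
      illustration; NR_DEFAULT_CPUS and the node count are assumptions, not the
      kernel's constants):

          #include <stdio.h>

          #define NR_DEFAULT_CPUS 32     /* first 32 CPUs, as in the commit */

          int main(void)
          {
              int nr_nodes = 2;          /* nodes known from the memory map */

              /* With no ACPI NUMA description for the CPUs, spread the first
               * 32 possible CPUs round-robin over the known nodes, so per_cpu
               * space is only reserved for CPUs that could actually exist. */
              for (int cpu = 0; cpu < NR_DEFAULT_CPUS; cpu++)
                  printf("cpu %2d -> node %d\n", cpu, cpu % nr_nodes);
              return 0;
          }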
  4. 05 April 2008, 2 commits
  5. 07 March 2008, 1 commit
  6. 05 February 2008, 1 commit
  7. 26 January 2008, 1 commit
  8. 08 December 2007, 1 commit
  9. 30 October 2007, 1 commit
    • [IA64] /proc/cpuinfo "physical id" field cleanups · 113134fc
      Committed by Alex Chiang
      Clean up the process for presenting the "physical id" field in
      /proc/cpuinfo. (The PAL-to-SAL fallback below is sketched after this entry.)
      
      	- remove global smp_num_cpucores, as it is mostly useless
      
      	- remove check_for_logical_procs(), since we do the same
      	  functionality in identify_siblings()
      
      	- reflow logic in identify_siblings(). If an older CPU
      	  does not implement PAL_LOGICAL_TO_PHYSICAL, we may still
      	  be able to get useful information from SAL_PHYSICAL_ID_INFO
      
      	- in identify_siblings(), threads/cores are a property of
      	  the CPU, not the platform
      
      	- remove useless printk's about multi-core / thread
      	  capability in identify_siblings(), as that information
      	  is readily available in /proc/cpuinfo, and printing for
      	  the BSP only adds little value
      
      	- smp_num_siblings is now meaningful if any CPU in the
      	  system supports threads, not just the BSP
      
      	- expose "physical id" field, even on CPUs that are not
      	  multi-core / multi-threaded (as long as we have a valid
      	  value). Now we know what sockets Madisons live in too.
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      113134fc
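
      The PAL-to-SAL fallback in the reflowed identify_siblings() logic looks
      roughly like the sketch below. The two firmware calls are stubbed out with
      invented return values; only the fallback pattern itself reflects the commit:

          #include <stdio.h>

          /* Stubs standing in for the firmware interfaces named above;
           * -1 means "call not implemented on this (older) CPU". */
          static int pal_logical_to_physical(int cpu, int *socket)
          {
              (void)cpu; (void)socket;
              return -1;                     /* pretend the PAL call is missing */
          }

          static int sal_physical_id_info(int cpu, int *socket)
          {
              *socket = cpu / 2;             /* invented 2-threads-per-socket map */
              return 0;
          }

          /* Prefer PAL_LOGICAL_TO_PHYSICAL, but fall back to
           * SAL_PHYSICAL_ID_INFO so older CPUs still get a "physical id". */
          static int physical_id(int cpu)
          {
              int socket;

              if (pal_logical_to_physical(cpu, &socket) == 0)
                  return socket;
              if (sal_physical_id_info(cpu, &socket) == 0)
                  return socket;
              return -1;                     /* no valid value: hide the field */
          }

          int main(void)
          {
              for (int cpu = 0; cpu < 4; cpu++)
                  printf("cpu %d: physical id %d\n", cpu, physical_id(cpu));
              return 0;
          }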
  10. 22 October 2007, 1 commit
    • kexec: add BSS to resource tree · 00bf4098
      Committed by Bernhard Walle
      Add the BSS to the resource tree, just as kernel text and kernel data
      already are. The main reason behind this is to avoid a crashkernel
      reservation in that area.
      
      While it's not strictly necessary to have the BSS in the resource tree (the
      actual collision detection is done earlier, in reserve_bootmem()), the BSS
      resource should be presented to the user in /proc/iomem just like Kernel
      data and Kernel code (see the sketch after this entry).
      
      Note: The patch is currently only implemented for x86 and ia64 (because
      efi_initialize_iomem_resources() has the same signature on i386 and ia64).
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00bf4098
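
      A small model of what the change means for crashkernel placement: with a
      "Kernel bss" entry in the tree, a requested reservation that overlaps it can
      be detected and refused. Addresses below are invented for illustration:

          #include <stdio.h>
          #include <stdint.h>

          /* Simplified stand-ins for the resource entries shown in /proc/iomem. */
          struct res { const char *name; uint64_t start, end; };

          static const struct res kernel_res[] = {
              { "Kernel code", 0x00100000, 0x005fffff },
              { "Kernel data", 0x00600000, 0x008fffff },
              { "Kernel bss",  0x00900000, 0x00abffff },  /* newly visible entry */
          };

          static int overlaps(uint64_t s, uint64_t e, const struct res *r)
          {
              return s <= r->end && e >= r->start;
          }

          int main(void)
          {
              uint64_t crash_start = 0x00a00000, crash_end = 0x00ffffff;

              for (size_t i = 0; i < sizeof(kernel_res) / sizeof(kernel_res[0]); i++)
                  if (overlaps(crash_start, crash_end, &kernel_res[i]))
                      printf("crashkernel would collide with %s\n",
                             kernel_res[i].name);
              return 0;
          }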
  11. 20 October 2007, 1 commit
  12. 17 October 2007, 2 commits
  13. 01 September 2007, 1 commit
  14. 29 August 2007, 1 commit
  15. 18 August 2007, 1 commit
  16. 31 July 2007, 1 commit
  17. 26 July 2007, 1 commit
  18. 20 July 2007, 1 commit
  19. 17 July 2007, 1 commit
    • serial: convert early_uart to earlycon for 8250 · 18a8bd94
      Committed by Yinghai Lu
      Because SERIAL_PORT_DFNS was removed from include/asm-i386/serial.h and
      include/asm-x86_64/serial.h, the serial8250_ports need to be probed late in
      the serial initialization stage. The console_init => serial8250_console_init
      => register_console => serial8250_console_setup path returns -ENODEV, so
      console ttyS0 cannot be enabled at that time. We would have to wait until
      uart_add_one_port() in drivers/serial/serial_core.c calls register_console()
      to get console ttyS0, and that is too late.
      
      Make early_uart use early_param, so the UART console can be used earlier.
      Make it a bootconsole with the CON_BOOT flag, so it can use the console
      handover feature and switch to the corresponding normal serial console
      automatically (the option-string format is sketched after this entry).
      
      The new command line will be:
      	console=uart8250,io,0x3f8,9600n8
      	console=uart8250,mmio,0xff5e0000,115200n8
      or
      	earlycon=uart8250,io,0x3f8,9600n8
      	earlycon=uart8250,mmio,0xff5e0000,115200n8
      
      It will print at a very early stage:
      	Early serial console at I/O port 0x3f8 (options '9600n8')
      	console [uart0] enabled
      Later, for the console, it will print:
      	console handover: boot [uart0] -> real [ttyS0]
      
      Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Gerd Hoffmann <kraxel@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18a8bd94
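
      A rough sketch of how an option string like the ones above decomposes
      (editor's illustration only; the real parsing lives in the 8250 early
      console code):

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          int main(void)
          {
              /* "uart8250,<io|mmio>,<address>,<baud><parity><bits>" */
              char opt[] = "uart8250,io,0x3f8,9600n8";
              char *type    = strtok(opt, ",");    /* "uart8250" */
              char *space   = strtok(NULL, ",");   /* "io" or "mmio" */
              char *addr    = strtok(NULL, ",");   /* port or MMIO address */
              char *options = strtok(NULL, ",");   /* baud/parity/bits */

              printf("type=%s space=%s addr=0x%lx options=%s\n",
                     type, space, strtoul(addr, NULL, 16), options);
              return 0;
          }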
  20. 10 July 2007, 1 commit
    • sched: zap the migration init / cache-hot balancing code · 0437e109
      Committed by Ingo Molnar
      the SMP load-balancer uses the boot-time migration-cost estimation
      code to attempt to improve the quality of balancing. The reason for
      this code is that the discrete priority queues do not preserve
      the order of scheduling accurately, so the load-balancer skips
      tasks that were running on a CPU 'recently'.
      
      this code is fundamentally fragile: the boot-time migration-cost detector
      doesn't really work on systems with large L3 caches, it caused boot
      delays on large systems, and the whole cache-hot concept made the
      balancing code pretty nondeterministic as well.
      
      (and hey, i wrote most of it, so i can say it out loud that it sucks ;-)
      
      under CFS the same cache-affinity goal can be achieved without any
      cache-hot special case: tasks are sorted in the 'timeline' tree and the
      SMP balancer picks tasks from the left side of the tree, so the most
      cache-cold task is balanced automatically. (A toy model follows this entry.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0437e109
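
      A toy model of that last point, with a plain sorted array standing in for
      the CFS timeline tree (names and keys are invented; this is only meant to
      show "pick the leftmost entry, no cache-hot estimation needed"):

          #include <stdio.h>
          #include <stdlib.h>

          /* key plays the role of CFS vruntime: the smallest key belongs to the
           * task that has run the least recently, i.e. the most cache-cold one. */
          struct task { const char *name; unsigned long key; };

          static int cmp(const void *a, const void *b)
          {
              const struct task *ta = a, *tb = b;
              return (ta->key > tb->key) - (ta->key < tb->key);
          }

          int main(void)
          {
              struct task rq[] = { { "A", 120 }, { "B", 40 }, { "C", 300 } };

              qsort(rq, 3, sizeof(rq[0]), cmp);
              printf("migrate: %s\n", rq[0].name);   /* leftmost = most cache-cold */
              return 0;
          }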
  21. 12 May 2007, 1 commit
  22. 08 May 2007, 1 commit
  23. 07 April 2007, 1 commit
  24. 21 March 2007, 1 commit
  25. 08 March 2007, 1 commit
  26. 07 March 2007, 1 commit
    • [IA64] kexec: Use EFI_LOADER_DATA for ELF core header · cee87af2
      Committed by Magnus Damm
      The address where the ELF core header is stored is passed to the secondary
      kernel as a kernel command line option.  The memory area for this header is
      also marked as a separate EFI memory descriptor on ia64.
      
      At the moment that separate EFI memory descriptor has the type
      EFI_UNUSABLE_MEMORY.  With such a type the secondary kernel skips over the
      entire memory granule (a config option, 16M or 64M) when detecting memory.
      If we are lucky we will just lose some memory, but if we happen to have
      data in the same granule (such as an initramfs image), then this data will
      never get mapped and the kernel bombs out when trying to access it.
      
      So this is an attempt to fix this by changing the EFI memory descriptor
      type into EFI_LOADER_DATA.  This type is the same type used for the kernel
      data and for initramfs.  In the secondary kernel we then handle the ELF
      core header data the same way as we handle the initramfs image.
      
      This patch contains the kernel changes to make this happen.  It is pretty
      straightforward: we reserve the area in reserve_memory().  The address for
      the area comes from the kernel command line and the size comes from the
      specialized EFI parsing function vmcore_find_descriptor_size(). (This flow
      is sketched after this entry.)
      
      The kexec-tools-testing code for this can be found here:
      http://lists.osdl.org/pipermail/fastboot/2007-February/005983.html
      Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
      Cc: Simon Horman <horms@verge.net.au>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      cee87af2
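
      The flow described above, as an editor's sketch (parse the address from the
      command line, ask the EFI memory map for the size, reserve the range). The
      helper standing in for vmcore_find_descriptor_size() just returns an
      invented size:

          #include <stdio.h>
          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>

          static uint64_t parse_elfcorehdr(const char *cmdline)
          {
              const char *p = strstr(cmdline, "elfcorehdr=");
              return p ? strtoull(p + strlen("elfcorehdr="), NULL, 0) : 0;
          }

          /* Stand-in: the real function walks the EFI memory map looking for
           * the EFI_LOADER_DATA descriptor that starts at 'addr'. */
          static uint64_t vmcore_find_descriptor_size(uint64_t addr)
          {
              (void)addr;
              return 0x2000;                 /* invented size for the example */
          }

          int main(void)
          {
              const char *cmdline = "root=/dev/sda2 elfcorehdr=0x3f000000";
              uint64_t start = parse_elfcorehdr(cmdline);
              uint64_t size  = start ? vmcore_find_descriptor_size(start) : 0;

              if (start && size)
                  printf("reserve ELF core header: 0x%llx - 0x%llx\n",
                         (unsigned long long)start,
                         (unsigned long long)(start + size));
              return 0;
          }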
  27. 13 February 2007, 1 commit
  28. 07 February 2007, 1 commit
    • [IA64] remove per-cpu ia64_phys_stacked_size_p8 · a0776ec8
      Committed by Chen, Kenneth W
      It's not efficient to use a per-cpu variable just to store how many
      physical stacked registers a CPU has.  Ever since the first incarnation
      of ia64 up to the upcoming Montecito processor, that value has been
      "glued" to 96.  Having a variable in memory means that the kernel burns
      an extra cacheline access on every syscall and kernel exit path.  Such a
      "static" value is better served by the instruction-patching utility that
      exists today.  Convert ia64_phys_stacked_size_p8 into dynamic insn
      patching.
      
      This also has the pleasant side effect of eliminating access to the
      per-cpu area while psr.ic=0 in the kernel exit path. (Fixable by the
      per-cpu DTC work, but why bother?)
      
      There might be a concern about the default value encoded in the
      instruction in the kernel image.  It shouldn't be a concern, for these
      reasons:
      
      (1) cpu_init() is called at CPU initialization.  In there, we
          find out the physical stacked register size from PAL and patch
          two instructions in the kernel exit code.  The code in question
          cannot be executed before the patching is done.
      
      (2) The current implementation stores zero in ia64_phys_stacked_size_p8,
          and that is the value the current kernel exit path loads.  With the
          new code, it is as if we stored a register size of 96 in
          ia64_phys_stacked_size_p8, which is a better safety net.  Given that
          (1) above can never fail, having (2) is just a bonus.
      
      All in all, this patch allows one less memory reference in the kernel
      exit path, thus reducing syscall and interrupt return latency, and
      avoids polluting potentially useful data in the CPU cache.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      a0776ec8
  29. 06 February 2007, 1 commit
    • [IA64] use snprintf() on features field of /proc/cpuinfo · ae0af3e3
      Committed by Aron Griffis
      Some patches have turned up on xen-devel recently to convert strcpy()
      to safer alternatives and so forth.  While reviewing those patches
      I noticed that the features string building could be cleaned up.
      
      This patch uses snprintf() instead of strcpy() and direct character
      pointer manipulation.  It makes the features string building safe and
      gets rid of the special case for features output in show_cpuinfo()
      (see the sketch after this entry).
      
      Additionally I removed the (int) cast of ARRAY_SIZE, which seems to
      serve no purpose.
      Signed-off-by: Aron Griffis <aron@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      ae0af3e3
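
      The string-building pattern the commit describes looks roughly like this
      (editor's sketch; the feature names are just examples):

          #include <stdio.h>

          int main(void)
          {
              const char *features[] = { "branchlong", "16-byte atomic ops" };
              char buf[128] = "";
              size_t off = 0;

              /* snprintf() with a running offset replaces strcpy() and manual
               * pointer arithmetic; snprintf() can never overrun the buffer. */
              for (size_t i = 0; i < sizeof(features) / sizeof(features[0]); i++) {
                  if (off >= sizeof(buf))
                      break;                              /* buffer is full */
                  off += snprintf(buf + off, sizeof(buf) - off, "%s%s",
                                  off ? ", " : "", features[i]);
              }

              printf("features : %s\n", buf);
              return 0;
          }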
  30. 13 December 2006, 3 commits
    • [IA64] Take defensive stance on ia64_pal_get_brand_info() · 75f6a1de
      Committed by Tony Luck
      Stephane thought he saw a problem here (but was just confused
      by the return value from ia64_pal_get_brand_info()).  But we
      should be more defensive here in case a prototype PAL for
      a future processor doesn't implement this PAL call.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      75f6a1de
    • [IA64] Kexec/Kdump: honour non-zero crashkernel offset. · ad1c3ba7
      Committed by Horms
      There seems to be value both in allowing the kernel to determine
      the base offset of the crashkernel automatically and in allowing
      users to specify it.
      
      The old behaviour on ia64, which is still the current behaviour on
      most architectures, is for the user to always specify the address.
      Recently ia64 was changed so that the address is always determined
      automatically.
      
      With this patch the kernel automatically determines the offset if
      the supplied value is 0; otherwise it uses the value provided
      (a sketch follows this entry).
      
      This should probably be backed by a documentation change.
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      ad1c3ba7
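
      The rule is simple enough to show in a few lines (editor's sketch;
      find_suitable_base() is a made-up stand-in for the automatic search):

          #include <stdio.h>
          #include <stdint.h>

          static uint64_t find_suitable_base(uint64_t size)
          {
              (void)size;
              return 0x4000000;              /* invented result for the example */
          }

          /* Base 0 means "let the kernel pick"; any other value is honoured. */
          static uint64_t crashkernel_base(uint64_t requested, uint64_t size)
          {
              return requested ? requested : find_suitable_base(size);
          }

          int main(void)
          {
              printf("0x%llx\n", (unsigned long long)crashkernel_base(0, 64 << 20));
              printf("0x%llx\n", (unsigned long long)crashkernel_base(0x8000000, 64 << 20));
              return 0;
          }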
    • [IA64] CONFIG_KEXEC/CONFIG_CRASH_DUMP permutations · 45a98fc6
      Committed by Horms
      Actually, on reflection I think that there is a good case for
      keeping the options separate. I am thinking particularly of people
      who want a very small crashdump kernel and thus don't want to compile
      in kexec.
      
      The patch below should fix things up so that all valid combinations of
      KEXEC, CRASH_DUMP and VMCORE compile cleanly - VMCORE depends on
      CRASH_DUMP, which is why I said valid combinations. In a nutshell
      it just untangles unrelated code and switches around a few defines.
      
      Please note that it creates a new file, arch/ia64/kernel/crash_dump.c.
      This is in keeping with the i386 implementation.
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      45a98fc6
  31. 08 December 2006, 1 commit
    • [IA64] IA64 Kexec/kdump · a7956113
      Committed by Zou Nan hai
      Changes and updates.
      
      1. Remove the fake rendezvous path and related code, following discussion with Khalid Aziz.
      2. fc.i offset fix in relocate_kernel.S.
      3. iosapic shutdown code EOI and mask race fix from Fujitsu.
      4. Warm boot hook in machine_kexec to SN SAL code from Jack Steiner.
      5. Send slaves to the SAL slave loop, patch from Jay Lan.
      6. Kdump on non-recoverable MCA event, patch from Jay Lan.
      7. Use CTL_UNNUMBERED in the kdump_on_init sysctl.
      Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      a7956113
  32. 01 November 2006, 1 commit
    • [IA64] move SAL_CACHE_FLUSH check later in boot · fa1d19e5
      Committed by Troy Heber
      The check to see if the firmware drops interrupts during a
      SAL_CACHE_FLUSH is done too early in the boot. SAL_CACHE_FLUSH expects
      to be able to make PAL calls in virtual mode; on some cell-based
      machines a fault occurs, causing an MCA. This patch moves the check
      after mmu_context_init so the TLB and VHPT are properly set up.
      
      Signed-off-by: Troy Heber <troy.heber@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      fa1d19e5
  33. 02 October 2006, 1 commit
  34. 11 July 2006, 1 commit
  35. 01 July 2006, 1 commit
  36. 22 June 2006, 1 commit