1. 06 May 2009: 1 commit
  2. 18 Apr 2009: 1 commit
  3. 04 Apr 2009: 1 commit
    • x86, ACPI: add support for x2apic ACPI extensions · 7237d3de
      Committed by Suresh Siddha
      All logical processors with APIC ID values of 255 and greater will have
      their APIC reported through the Processor X2APIC structure (type-9
      entry type), and all logical processors with APIC IDs less than 255
      will have their APIC reported through the legacy Processor Local APIC
      structure (type-0 entry type) only.  The same holds for NMI structure
      reporting.
          
      The Processor X2APIC Affinity structure provides the association between the
      X2APIC ID of a logical processor and the proximity domain to which the logical
      processor belongs.
          
      For OSPM, Processor IDs outside the 0-254 range are to be declared as
      Device() objects in the ACPI namespace.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
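
      The 255 cutover described above is simple to state in code.  Below is
      a minimal, self-contained C sketch of the type-0 versus type-9
      selection rule; the MADT_TYPE_* values mirror the entry types named in
      the message, but madt_entry_type() is a hypothetical helper for
      illustration, not a kernel function.

        #include <stdint.h>
        #include <stdio.h>

        #define MADT_TYPE_LOCAL_APIC    0  /* legacy Processor Local APIC */
        #define MADT_TYPE_LOCAL_X2APIC  9  /* Processor X2APIC structure  */

        /* APIC IDs of 255 and above only fit the 32-bit x2APIC format. */
        static int madt_entry_type(uint32_t apic_id)
        {
                return (apic_id >= 255) ? MADT_TYPE_LOCAL_X2APIC
                                        : MADT_TYPE_LOCAL_APIC;
        }

        int main(void)
        {
                uint32_t ids[] = { 0, 254, 255, 4096 };
                unsigned int i;

                for (i = 0; i < sizeof(ids) / sizeof(ids[0]); i++)
                        printf("APIC ID %u -> type-%d entry\n",
                               (unsigned int)ids[i], madt_entry_type(ids[i]));
                return 0;
        }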
  4. 18 Feb 2009: 1 commit
  5. 21 Jan 2009: 1 commit
    • x86: uv cleanup, build fix · 4ec71fa2
      Committed by Ingo Molnar
      Fix:
      
       arch/x86/mm/srat_64.c: In function ‘acpi_numa_processor_affinity_init’:
       arch/x86/mm/srat_64.c:141: error: implicit declaration of function ‘get_uv_system_type’
       arch/x86/mm/srat_64.c:141: error: ‘UV_X2APIC’ undeclared (first use in this function)
       arch/x86/mm/srat_64.c:141: error: (Each undeclared identifier is reported only once
       arch/x86/mm/srat_64.c:141: error: for each function it appears in.)
      
      A couple of UV definitions were moved to asm/uv/uv.h, but srat_64.c did
      not include that header. Add it.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
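
      The fix itself is the one-line include the message calls for;
      presumably it amounts to this (exact placement among srat_64.c's
      existing includes assumed):

        /* arch/x86/mm/srat_64.c: pick up get_uv_system_type() and UV_X2APIC */
        #include <asm/uv/uv.h>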
  6. 17 Dec 2008: 1 commit
  7. 13 Oct 2008: 1 commit
  8. 11 Jul 2008: 1 commit
  9. 08 Jul 2008: 2 commits
    • x86: remove end_pfn in 64bit · c987d12f
      Committed by Yinghai Lu
      Remove end_pfn on 64-bit and use max_pfn directly.
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: cleanup early per cpu variables/accesses v4 · 23ca4bba
      Committed by Mike Travis
        * Introduce a new PER_CPU macro called "EARLY_PER_CPU".  This is
          used by some per_cpu variables that are initialized and accessed
          before there are per_cpu areas allocated.
      
          ["Early" in respect to per_cpu variables is "earlier than the per_cpu
          areas have been setup".]
      
          This patchset adds these new macros:
      
      	DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
      	EXPORT_EARLY_PER_CPU_SYMBOL(_name)
      	DECLARE_EARLY_PER_CPU(_type, _name)
      
      	early_per_cpu_ptr(_name)
      	early_per_cpu_map(_name, _idx)
      	early_per_cpu(_name, _cpu)
      
          The DEFINE macro defines the per_cpu variable as well as the early
          map and pointer.  It also initializes the per_cpu variable and map
          elements to "_initvalue".  The early_* macros provide access to
          the initial map (usually set up during system init) and to the
          early pointer.  This pointer initially points to the early map but
          is NULL'ed once the actual per_cpu areas are set up; after that,
          the per_cpu variable itself is the correct way to access the value.
          (A standalone sketch of this lifecycle follows this entry.)
      
          The early_per_cpu() macro is not very efficient, but it does show
          how to access the variable from a function that can be called both
          "early" and "late".  It tests whether the early pointer is still
          non-NULL (and therefore still valid); otherwise, the per_cpu
          variable is used instead:
      
      	#define early_per_cpu(_name, _cpu) 			\
      		(early_per_cpu_ptr(_name) ?			\
      			early_per_cpu_ptr(_name)[_cpu] :	\
      			per_cpu(_name, _cpu))
      
          A better method is to actually check the pointer manually.  In the
          case below, numa_set_node can be called both "early" and "late":
      
      	void __cpuinit numa_set_node(int cpu, int node)
      	{
      	    int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);
      
      	    if (cpu_to_node_map)
      		    cpu_to_node_map[cpu] = node;
      	    else
      		    per_cpu(x86_cpu_to_node_map, cpu) = node;
      	}
      
        * Add a flag "arch_provides_topology_pointers" that indicates that
          pointers to topology cpumask_t maps are available.  Otherwise, use
          the function returning the cpumask_t value.  This avoids copying
          data on and off the stack when cpumask_t is very large.
      
        * The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while
          the non-debug case has been optimized a bit.
      
        * Remove a compiler warning about an unreferenced variable in
          drivers/base/topology.c
      
        * Clean up #ifdef in setup.c
      
      For inclusion into sched-devel/latest tree.
      
      Based on:
      	git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
          +   sched-devel/latest  .../mingo/linux-2.6-sched-devel.git
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
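
      As referenced above, here is a standalone, plain-C demonstration of
      the early-pointer lifecycle the message describes.  Every name below
      (the two maps, the early pointer, setup_per_cpu_areas()) is a
      simplified stand-in for illustration, not the kernel's actual
      implementation.

        #include <stdio.h>

        #define NR_CPUS 4

        /* Stand-in for the real per_cpu storage. */
        static int x86_cpu_to_node_map_percpu[NR_CPUS];

        /* Early map used before the "per_cpu areas" exist, plus the early
         * pointer that initially points at it. */
        static int x86_cpu_to_node_map_early[NR_CPUS] = { -1, -1, -1, -1 };
        static int *x86_cpu_to_node_map_early_ptr = x86_cpu_to_node_map_early;

        static void numa_set_node(int cpu, int node)
        {
                int *map = x86_cpu_to_node_map_early_ptr;

                if (map)        /* still "early": write the boot map */
                        map[cpu] = node;
                else            /* per_cpu areas are live */
                        x86_cpu_to_node_map_percpu[cpu] = node;
        }

        static void setup_per_cpu_areas(void)
        {
                int cpu;

                /* Copy early values into the real storage, then NULL the
                 * early pointer so later callers take the per_cpu path. */
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        x86_cpu_to_node_map_percpu[cpu] =
                                x86_cpu_to_node_map_early[cpu];
                x86_cpu_to_node_map_early_ptr = NULL;
        }

        int main(void)
        {
                int cpu;

                numa_set_node(0, 1);            /* early path */
                setup_per_cpu_areas();
                numa_set_node(1, 2);            /* late path */
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        printf("cpu %d -> node %d\n", cpu,
                               x86_cpu_to_node_map_percpu[cpu]);
                return 0;
        }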
  10. 12 Jun 2008: 1 commit
  11. 25 May 2008: 1 commit
  12. 25 Apr 2008: 1 commit
  13. 20 Apr 2008: 1 commit
  14. 17 Apr 2008: 2 commits
  15. 19 Feb 2008: 1 commit
  16. 08 Feb 2008: 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Committed by Bernhard Walle
      This patchset adds a flags parameter to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect
      collisions between the crashkernel area and already-used memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag,
      BOOTMEM_EXCLUSIVE.  If that flag is set, the function returns -EBUSY
      if the memory has already been reserved.  This is to avoid conflicts.
      
      Because that code runs before SMP initialisation, there's no race condition
      inside reserve_bootmem_core().
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
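
      A sketch of how a caller such as the crashkernel reservation code
      might use the new flag.  The flags parameter and the -EBUSY return are
      taken from the message above; reserve_crashkernel_region() itself is a
      hypothetical example, not the actual patch.

        /* Hypothetical caller: with BOOTMEM_EXCLUSIVE set, a collision with
         * an earlier reservation comes back as -EBUSY instead of being
         * silently double-reserved. */
        static int __init reserve_crashkernel_region(unsigned long base,
                                                     unsigned long size)
        {
                int ret = reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE);

                if (ret == -EBUSY) {
                        printk(KERN_WARNING
                               "crashkernel: 0x%lx already in use\n", base);
                        return ret;
                }
                return 0;
        }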
  17. 30 Jan 2008: 10 commits
  18. 20 Oct 2007: 1 commit
  19. 18 Oct 2007: 1 commit
    • x86: fix cpu_to_node references · 98c9e27a
      Committed by Mike Travis
      On the x86_64 and i386 architectures, most arrays that are sized using
      NR_CPUS lie in local memory on node 0.  Not only will most (99%?) of
      the systems not use all the slots in these arrays, particularly when
      NR_CPUS is increased to accommodate future very-high-cpu-count
      systems, but a number of cache lines are passed unnecessarily over the
      system bus when these arrays are referenced by cpus on other nodes.
      
      Typically, the values in these arrays are referenced by each cpu
      accessing its own values, though when passing IPI interrupts, a cpu
      does access the data relevant to the targeted cpu/node.  Of course, if
      the referencing cpu is not on node 0, the reference will still require
      cross-node exchanges of cache lines.  A common use of this is for an
      interrupt service routine to pass the interrupt to other cpus local to
      that node.
      
      Ideally, all the elements in these arrays should be moved to the
      per_cpu data area.  In some cases (such as x86_cpu_to_apicid) the
      array is referenced before the per_cpu data areas are set up.  In this
      case, a static array is declared in the __initdata area and
      initialized by the booting cpu (BSP).  The values are then moved to
      the per_cpu area after it is initialized, and the original static
      array is freed with the rest of the __initdata.
      
      This patch:
      
      Fix four instances where cpu_to_node is referenced directly as an
      array instead of via the cpu_to_node macro.  This is preparation for
      moving it to the per_cpu data area.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
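
      The point of routing every reference through the macro, shown as a
      standalone toy; all names below are illustrative stand-ins, not the
      kernel's topology definitions.

        #include <stdio.h>

        #define NR_CPUS 4

        static int __cpu_to_node_map[NR_CPUS] = { 0, 0, 1, 1 };

        /* Accessor macro: callers never touch the map directly, so the
         * backing storage can later move (e.g. into the per_cpu area)
         * without touching any caller. */
        #define cpu_to_node(cpu) (__cpu_to_node_map[(cpu)])

        int main(void)
        {
                int cpu;

                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        printf("cpu %d is on node %d\n",
                               cpu, cpu_to_node(cpu));
                return 0;
        }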
  20. 11 Oct 2007: 2 commits
  21. 22 Jul 2007: 4 commits
  22. 03 May 2007: 1 commit
  23. 03 Feb 2007: 1 commit
  24. 22 Oct 2006: 1 commit
    • [PATCH] x86-64: x86_64 hot-add memory srat.c fix · 926fafeb
      Committed by keith mannthey
      This patch corrects the logic used in srat.c to figure out what action
      to take when registering hot-add areas.  Hot-add areas should only be
      added to the node information for the MEMORY_HOTPLUG_RESERVE case.
      When booting with MEMORY_HOTPLUG_SPARSE, hot-add areas on every node
      but the last were being included in the node data; the pages were then
      set up during kernel boot, and the kernel died when those pages were
      used.  This patch fixes this issue.
      Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
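
      Schematically, the corrected rule the message states, as a
      hypothetical fragment; the enum and helper are placeholders, not the
      actual srat.c code.

        enum hotadd_mode { MEMORY_HOTPLUG_RESERVE, MEMORY_HOTPLUG_SPARSE };

        /* Hot-add ranges contribute to a node's boot-time memory map only
         * in the RESERVE case; SPARSE ranges are brought online later. */
        static int hotadd_should_extend_node(enum hotadd_mode mode)
        {
                return mode == MEMORY_HOTPLUG_RESERVE;
        }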
  25. 01 Oct 2006: 1 commit