- 07 December 2005, 1 commit

Submitted by Jack Steiner

The per-node data structures are allocated with strided offsets that are a function of the node number. This prevents excessive cache-aliasing from occurring. On systems with a large number of nodes, the strided offset becomes too large. This patch restricts the maximum offset to 32MB. This is far larger than the size of any current L3 cache.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

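As a rough illustration of the capping scheme described above, here is a small stand-alone C sketch; the names `nodedata_align`, `PERCPU_PAGE_SIZE`, and `MAX_NODE_ALIGN_OFFSET` and their values are assumptions for the example, not the exact kernel code.

```c
#include <stdio.h>

/* Illustrative constants; the real values live in the ia64 mm code. */
#define PERCPU_PAGE_SIZE      (64UL * 1024)           /* assumed per-node stride */
#define MAX_NODE_ALIGN_OFFSET (32UL * 1024 * 1024)    /* the 32MB cap from the log */

/* Align addr up to 1MB, then add a node-dependent offset that is wrapped
 * so it can never exceed MAX_NODE_ALIGN_OFFSET. */
static unsigned long nodedata_align(unsigned long addr, unsigned long node)
{
	unsigned long aligned = (addr + (1UL << 20) - 1) & ~((1UL << 20) - 1);
	unsigned long offset  = (node * PERCPU_PAGE_SIZE) &
	                        (MAX_NODE_ALIGN_OFFSET - 1);

	return aligned + offset;
}

int main(void)
{
	/* On very large systems the raw stride would exceed 32MB;
	 * the mask keeps the per-node offset bounded. */
	printf("node 3:    %#lx\n", nodedata_align(0x4000000UL, 3));
	printf("node 1000: %#lx\n", nodedata_align(0x4000000UL, 1000));
	return 0;
}
```
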
- 09 November 2005, 1 commit

Submitted by Bob Picco

The original memory-less node allocation attempted to use NODEDATA_ALIGN for alignment. The bootmem allocator only allows power-of-two alignments, which caused a BUG_ON for some nodes. For CPU-only nodes, just allocate with a PERCPU_PAGE_SIZE alignment. Some older firmware reports SLIT distances of 0xff, which resulted in bestnode not being computed; this is now treated correctly. The failed-allocation check was removed because it is redundant: the bootmem allocator already makes this check. This fix has been boot tested on a 4-node machine which has 4 CPU-only nodes and 1 memory node. Thanks to Pete Keilty for reporting this and helping me test it.

Signed-off-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

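A minimal stand-alone C sketch of the alignment rule described in this entry; `is_power_of_two` and `choose_pernode_align` are hypothetical helpers, and the `PERCPU_PAGE_SIZE` value is assumed for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define PERCPU_PAGE_SIZE (64UL * 1024)   /* illustrative value */

/* The bootmem allocator only accepts power-of-two alignments. */
static bool is_power_of_two(unsigned long x)
{
	return x && !(x & (x - 1));
}

/* CPU-only nodes simply use PERCPU_PAGE_SIZE; nodes with memory may use a
 * larger, strided alignment, but it must still be a power of two. */
static unsigned long choose_pernode_align(bool has_memory, unsigned long strided)
{
	if (!has_memory)
		return PERCPU_PAGE_SIZE;
	return is_power_of_two(strided) ? strided : PERCPU_PAGE_SIZE;
}

int main(void)
{
	printf("cpu-only node -> align %#lx\n",
	       choose_pernode_align(false, 3 * PERCPU_PAGE_SIZE));
	printf("memory node   -> align %#lx\n",
	       choose_pernode_align(true, 1UL << 20));
	return 0;
}
```
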
- 30 October 2005, 1 commit

Submitted by Dave Hansen

pgdat->node_size_lock is basically only needed in one place in the normal code: show_mem(), which is the arch-specific sysrq-m printing function. Strictly speaking, the architectures not doing memory hotplug do not need this locking in show_mem(), but they are all included for completeness. This should also make any future consolidation of all of the implementations a little more straightforward.

This lock is also held in the sparsemem code during a memory removal, as sections are invalidated. This is the place where pfn_valid() is made false for a memory area that is being removed. The lock is only required when doing pfn_valid() operations on memory for which the user does not already hold a reference on the page, such as in show_mem().

Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

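A sketch of the show_mem()-style walk this entry describes, assuming the pgdat_resize_lock()/pgdat_resize_unlock() helpers that wrap node_size_lock; the function name and loop body are illustrative, not the actual arch code.

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/memory_hotplug.h>

/* Walk one node's pfn range under node_size_lock so that pfn_valid()
 * cannot change underneath us while sections are being removed. */
static void show_mem_one_node(pg_data_t *pgdat)
{
	unsigned long i, flags, present = 0;

	pgdat_resize_lock(pgdat, &flags);       /* takes pgdat->node_size_lock */
	for (i = 0; i < pgdat->node_spanned_pages; i++) {
		if (!pfn_valid(pgdat->node_start_pfn + i))
			continue;               /* hole, or section just removed */
		present++;
	}
	pgdat_resize_unlock(pgdat, &flags);

	printk(KERN_INFO "Node %d: %lu present pages\n",
	       pgdat->node_id, present);
}
```
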
- 05 October 2005, 1 commit

Submitted by Bob Picco

This patch is the minimal set of changes required by ia64 to use SPARSEMEM.

Signed-off-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

- 07 July 2005, 2 commits

Submitted by Tony Luck

Jesse Barnes provided the original version of this patch months ago, but other changes kept conflicting with it, so it got deferred. Greg Edwards dug it out of obscurity just over a week ago, and almost immediately another conflicting patch appeared (Bob Picco's memory-less nodes). I've resolved the conflicts and got it running again. CONFIG_SGI_TIOCX is set to "y" in defconfig, which causes a Tiger to not boot (oops in tiocx_init). But that can be resolved later ... get this in now before it gets stale again.

Signed-off-by: Tony Luck <tony.luck@intel.com>

Submitted by bob.picco

I reworked how nodes with only CPUs are treated. The patch below seems simpler to me and has eliminated the complicated routine reassign_cpu_only_nodes. There is no longer a requirement to modify ACPI NUMA information, which was in large part the complexity introduced in reassign_cpu_only_nodes.

This patch will produce a different number of nodes. For example, reassign_cpu_only_nodes would reduce a configuration of two CPU-only nodes and one memory node to one memory+CPUs node. This patch doesn't change the number of nodes, which means the user will see three: two nodes without memory and one node with all the memory.

While doing this patch, I noticed that early_nr_phys_cpus_node isn't serving any useful purpose. It is called once in find_pernode_space, but the value isn't used to compute pernode space.

Signed-off-by: bob.picco <bob.picco@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

- 24 June 2005, 1 commit

Submitted by Dave Hansen

This patch effectively eliminates direct use of pgdat->node_mem_map outside of the DISCONTIG code. On a flat memory system, these fields aren't currently used, and neither are they on a sparsemem system.

There was also a node_mem_map(nid) macro on many architectures. Its use, along with the use of ->node_mem_map itself, was not consistent. It has been removed in favor of two new, more explicit, arch-independent macros:

pgdat_page_nr(pgdat, pagenr)
nid_page_nr(nid, pagenr)

I called them "pgdat" and "nid" because we overload the term "node" to mean "NUMA node", "DISCONTIG node" or "pg_data_t" in very confusing ways. I believe the newer names are much clearer.

These macros can be overridden in the sparsemem case with a theoretically slower operation using node_start_pfn and pfn_to_page() instead. We could make this the only behavior if people want, but I don't want to change too much at once. One thing at a time. This patch removes more code than it adds.

Compile tested on alpha, alpha discontig, arm, arm-discontig, i386, i386 generic, NUMAQ, Summit, ppc64, ppc64 discontig, and x86_64. Full list here: http://sr71.net/patches/2.6.12/2.6.12-rc1-mhp2/configs/

Boot tested on NUMAQ, x86 SMP and ppc64 power4/5 LPARs.

Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Martin J. Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

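A sketch of what the two new helpers plausibly look like; treat these definitions, and especially the sparsemem override shown last, as assumptions rather than the exact patch contents.

```c
/* Sketch of the two arch-independent helpers described above: the
 * DISCONTIG form indexes the node's mem_map directly, while a sparsemem
 * build can fall back to node_start_pfn plus pfn_to_page(). */
#define pgdat_page_nr(pgdat, pagenr)	((pgdat)->node_mem_map + (pagenr))
#define nid_page_nr(nid, pagenr)	pgdat_page_nr(NODE_DATA(nid), (pagenr))

/* Possible sparsemem override (slower, but needs no node_mem_map). */
#define pgdat_page_nr_sparse(pgdat, pagenr) \
	pfn_to_page((pgdat)->node_start_pfn + (pagenr))
```
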
- 26 April 2005, 1 commit

Submitted by Robin Holt

This patch introduces using the quicklists for the pgd, pmd, and pte levels by combining the alloc and free functions into a common set of routines. This greatly simplifies the reading of this header file.

This patch is simple but necessary for large NUMA configurations. It simply ensures that only pages from the local node are added to a cpu's quicklist. This prevents the trapping of pages on a remote node's quicklist by starting a process, touching a large number of pages to fill pmd and pte entries, migrating to another node, and then unmapping or exiting. With those conditions, the pages get trapped and, if the machine has more than 100 nodes of the same size, the calculation of the pgtable high-water mark will be larger than any single node, so page table cache flushing will never occur.

I ran lmbench lat_proc fork and lat_proc exec on a zx1 with and without this patch and did not notice any change. On an sn2 machine, there was a slight improvement which is possibly due to pages from other nodes trapped on the test node before starting the run. I did not investigate further.

This patch shrinks the quicklist based upon free memory on the node instead of the high/low water marks. I have written it to enable preemption periodically and recalculate the amount to shrink every time we have freed enough pages that the quicklist size should have grown. I rescan the node's zones each pass because other processes may be draining node memory at the same time as we are adding.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

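A minimal sketch of the local-node filter this entry describes, written against 2005-era kernel interfaces; the per-cpu variable names and the function body are assumptions, not the actual ia64 pgalloc.h code.

```c
#include <linux/mm.h>
#include <linux/percpu.h>
#include <linux/topology.h>

/* Illustrative per-cpu quicklist head and counter (assumed names). */
static DEFINE_PER_CPU(void *, pgtable_quicklist);
static DEFINE_PER_CPU(long, pgtable_quicklist_size);

/* Free one page-table page: a page that belongs to a remote node goes
 * straight back to the page allocator, so it can never be trapped on
 * this CPU's quicklist; a local page is pushed onto the list for reuse. */
static inline void pgtable_quicklist_free(void *pgtable_entry)
{
	if (page_to_nid(virt_to_page(pgtable_entry)) != numa_node_id()) {
		free_page((unsigned long)pgtable_entry);
		return;
	}

	/* Use the first word of the freed page as the list link. */
	*(void **)pgtable_entry = __get_cpu_var(pgtable_quicklist);
	__get_cpu_var(pgtable_quicklist) = pgtable_entry;
	++__get_cpu_var(pgtable_quicklist_size);
}
```
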
- 17 April 2005, 1 commit

Submitted by Linus Torvalds

Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!