1. 17 Feb 2011 (4 commits)
    • x86-64, NUMA: Kill numa_nodes[] · 91556237
      Committed by Tejun Heo
      numa_nodes[] doesn't carry any information which isn't present in
      numa_meminfo.  Each entry is simply min/max range of all the memblks
      for the node.  This is not only redundant but also inaccurate when
      memblks for different nodes interleave - for example,
      find_node_by_addr() can return the wrong nodeid.
      
      Kill numa_nodes[] and always use numa_meminfo instead.
      
      * nodes_cover_memory() is renamed to numa_meminfo_cover_memory() and
        now operates on numa_meminfo and returns bool.
      
      * setup_node_bootmem() needs min/max range.  Compute the range on the
        fly.  setup_node_bootmem() invocation is restructured to use outer
        loop instead of hardcoding the double invocations.
      
      * find_node_by_addr() now operates on numa_meminfo.
      
      * setup_physnodes() builds physnodes[] from memblks.  This will go
        away when emulation code is updated to use struct numa_meminfo.
      
      This patch also makes the following misc changes.
      
      * Clearing of nodes_add[] is converted to memset().
      
      * numa_add_memblk() in amd_numa_init() is moved down a bit for
        consistency.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      91556237
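      As a minimal illustration of the "compute the range on the fly" point above, a
      node's min/max span can be derived by scanning a numa_meminfo-style table of
      memblks instead of keeping a separate numa_nodes[] array.  The struct layout and
      names below are simplified stand-ins for this sketch, not the kernel's exact
      definitions:

        #include <stdint.h>

        struct memblk  { uint64_t start, end; int nid; };        /* one memory block */
        struct meminfo { int nr_blks; struct memblk blk[64]; };  /* per-boot table   */

        /* Compute a node's [start, end) span by scanning every memblk that
         * belongs to it; no per-node min/max array needs to be maintained. */
        static void node_range(const struct meminfo *mi, int nid,
                               uint64_t *start, uint64_t *end)
        {
                uint64_t s = UINT64_MAX, e = 0;
                int i;

                for (i = 0; i < mi->nr_blks; i++) {
                        if (mi->blk[i].nid != nid)
                                continue;
                        if (mi->blk[i].start < s)
                                s = mi->blk[i].start;
                        if (mi->blk[i].end > e)
                                e = mi->blk[i].end;
                }
                *start = s;
                *end = e;
        }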
    • x86-64, NUMA: Add common find_node_by_addr() · a844ef46
      Committed by Tejun Heo
      srat_64.c and amdtopology_64.c had their own versions of
      find_node_by_addr() which were basically the same.  Add common one in
      numa_64.c and remove the duplicates.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      a844ef46
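      Roughly, a common find_node_by_addr() amounts to a linear scan over the shared
      memblk table, returning the owning node or a "no node" marker.  The snippet
      below is an illustrative sketch with simplified types, not the kernel code:

        #include <stdint.h>

        #define NUMA_NO_NODE (-1)

        struct memblk  { uint64_t start, end; int nid; };
        struct meminfo { int nr_blks; struct memblk blk[64]; };

        /* Return the node owning 'addr', or NUMA_NO_NODE if no block covers it. */
        static int find_node_by_addr(const struct meminfo *mi, uint64_t addr)
        {
                int i;

                for (i = 0; i < mi->nr_blks; i++)
                        if (addr >= mi->blk[i].start && addr < mi->blk[i].end)
                                return mi->blk[i].nid;
                return NUMA_NO_NODE;
        }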
    • x86-64, NUMA: Unify use of memblk in all init methods · 43a662f0
      Committed by Tejun Heo
      Make both amd and dummy use numa_add_memblk() to describe the detected
      memory blocks.  This allows initmem_init() to call
      numa_register_memblk() regardless of init method in use.  Drop custom
      memory registration code from amd and dummy.
      
      After this change, memblk merge/cleanup in numa_register_memblks() is
      applied to all init methods.
      
      As this makes compute_hash_shift() and numa_register_memblks() used
      only inside numa_64.c, make them static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      43a662f0
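      As a rough illustration of the unified flow, an init method only has to describe
      the blocks it detected through the shared add helper; merge/cleanup and
      registration then happen in one common place.  Names, signatures, and the printed
      message below are simplified stand-ins for this sketch:

        #include <stdint.h>
        #include <stdio.h>

        struct memblk { uint64_t start, end; int nid; };
        static struct memblk blks[64];
        static int nr_blks;

        /* Shared helper every init method calls to describe detected memory. */
        static int numa_add_memblk(int nid, uint64_t start, uint64_t end)
        {
                if (nr_blks >= 64)
                        return -1;
                blks[nr_blks++] = (struct memblk){ .start = start, .end = end, .nid = nid };
                return 0;
        }

        /* A "dummy" init method: the whole range belongs to node 0. */
        static int dummy_numa_init(uint64_t start, uint64_t end)
        {
                printf("Faking one node at [%#llx-%#llx]\n",
                       (unsigned long long)start, (unsigned long long)end - 1);
                return numa_add_memblk(0, start, end);
        }

        int main(void)
        {
                dummy_numa_init(0, 1ULL << 32);   /* pretend 4G of memory */
                /* ...common code would now merge/cleanup and register blks[]... */
                return 0;
        }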
    • x86-64, NUMA: Factor out memblk handling into numa_{add|register}_memblk() · ef396ec9
      Committed by Tejun Heo
      Factor out memblk handling from srat_64.c into two functions in
      numa_64.c.  This patch doesn't introduce any behavior change.  The
      next patch will make all init methods use these functions.
      
      - v2: Fixed build failure on 32bit due to misplaced NR_NODE_MEMBLKS.
            Reported by Ingo.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      ef396ec9
  2. 16 Feb 2011 (2 commits)
    • x86-64, NUMA: Use common numa_nodes[] · 206e4208
      Committed by Tejun Heo
      ACPI and amd are using separate nodes[] arrays.  Add numa_nodes[] and
      use it in all NUMA init methods.  cutoff_node() cleanup is moved
      from srat_64.c to numa_64.c and applied in initmem_init() regardless
      of init methods.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      206e4208
    • x86-64, NUMA: Use common {cpu|mem}_nodes_parsed · ec8cf29b
      Committed by Tejun Heo
      ACPI and amd are using separate nodes_parsed masks.  Add
      {cpu|mem}_nodes_parsed and use them in all NUMA init methods.
      Initialization of the masks and building node_possible_map are now
      handled commonly by initmem_init().
      
      dummy_numa_init() is updated to set node 0 on both masks.  While at
      it, move the info messages from scan to init.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      ec8cf29b
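      A toy sketch of the parsed-mask idea above: each init method marks the nodes it
      saw for CPUs and for memory, and the common initmem_init() path ORs the two masks
      into the possible map.  Plain bitmasks stand in for the kernel's nodemask_t here,
      and all names are illustrative:

        #include <stdint.h>
        #include <stdio.h>

        static uint64_t cpu_nodes_parsed;   /* nodes seen in CPU affinity info    */
        static uint64_t mem_nodes_parsed;   /* nodes seen in memory affinity info */

        static void node_set(int nid, uint64_t *mask) { *mask |= 1ULL << nid; }

        /* Dummy init: everything is node 0, on both masks. */
        static void dummy_numa_init(void)
        {
                node_set(0, &cpu_nodes_parsed);
                node_set(0, &mem_nodes_parsed);
        }

        int main(void)
        {
                uint64_t node_possible_map;

                dummy_numa_init();
                /* Common code: possible nodes = union of both parsed masks. */
                node_possible_map = cpu_nodes_parsed | mem_nodes_parsed;
                printf("possible nodes mask: %#llx\n",
                       (unsigned long long)node_possible_map);
                return 0;
        }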
  3. 28 Jan 2011 (4 commits)
    • x86: Unify NUMA initialization between 32 and 64bit · 8db78cc4
      Committed by Tejun Heo
      Now that everything else is unified, NUMA initialization can be
      unified too.
      
      * numa_init_array() and init_cpu_to_node() are moved from
        numa_64 to numa.
      
      * numa_32::initmem_init() is updated to call numa_init_array()
        and setup_arch() to call init_cpu_to_node() on 32bit too.
      
      * x86_cpu_to_node_map is now initialized to NUMA_NO_NODE on
        32bit too. This is safe now as numa_init_array() will initialize
        it early during boot.
      
      This makes the NUMA mapping fully initialized before
      setup_per_cpu_areas() on 32bit too, so the first percpu chunk,
      which contains all the static variables and some of the dynamic
      area, is allocated with NUMA affinity correctly taken into account.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-17-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      8db78cc4
    • x86: Unify node_to_cpumask_map handling between 32 and 64bit · de2d9445
      Committed by Tejun Heo
      x86_32 has been managing node_to_cpumask_map explicitly from
      map_cpu_to_node() and friends in a rather ugly way.  With
      previous changes, it's now possible to share the code with
      64bit.
      
      * When CONFIG_NUMA_EMU is disabled, numa_add/remove_cpu() are
        implemented in numa.c and shared by 32 and 64bit.  CONFIG_NUMA_EMU
        versions still live in numa_64.c.
      
        NUMA_EMU's dependency on 64bit is planned to be removed and the
        above should go away together.
      
      * identify_cpu() now calls numa_add_cpu() for 32bit too.  This
        makes the explicit mask management from map_cpu_to_node() unnecessary.
      
      * The whole x86_32 specific map_cpu_to_node() chunk is no longer
        necessary.  Dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-16-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      de2d9445
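      A simplified sketch of what the shared add/remove helpers boil down to: set or
      clear the CPU's bit in its node's cpumask, rather than open-coding the map
      updates on 32bit.  Bit arrays stand in for cpumask_t, and the cpu-to-node array
      is a stand-in for the real lookup, so treat all of this as illustrative:

        #include <stdint.h>

        #define MAX_NUMNODES 64

        static uint64_t node_to_cpumask_map[MAX_NUMNODES]; /* one CPU bitmap per node */
        static int cpu_node[64];                           /* sketch's cpu -> node map */

        static void numa_add_cpu(int cpu)
        {
                node_to_cpumask_map[cpu_node[cpu]] |= 1ULL << cpu;
        }

        static void numa_remove_cpu(int cpu)
        {
                node_to_cpumask_map[cpu_node[cpu]] &= ~(1ULL << cpu);
        }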
    • x86: Unify CPU -> NUMA node mapping between 32 and 64bit · 645a7919
      Committed by Tejun Heo
      Unlike 64bit, 32bit has been using its own cpu_to_node_map[] for
      CPU -> NUMA node mapping.  Replace it with early_percpu variable
      x86_cpu_to_node_map and share the mapping code with 64bit.
      
      * USE_PERCPU_NUMA_NODE_ID is now enabled for 32bit too.
      
      * x86_cpu_to_node_map and numa_set/clear_node() are moved from
        numa_64 to numa.  For now, on 32bit, x86_cpu_to_node_map is initialized
        with 0 instead of NUMA_NO_NODE.  This is to avoid introducing an unexpected
        behavior change and will be updated once the init path is unified.
      
      * srat_detect_node() is now enabled for x86_32 too.  It calls
        numa_set_node() and initializes the mapping making explicit
        cpu_to_node_map[] updates from map/unmap_cpu_to_node() unnecessary.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-15-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
      645a7919
    • x86: Unify cpu/apicid <-> NUMA node mapping between 32 and 64bit · bbc9e2f4
      Committed by Tejun Heo
      The mapping between cpu/apicid and node is done via
      apicid_to_node[] on 64bit and apicid_2_node[] +
      apic->x86_32_numa_cpu_node() on 32bit. This difference makes it
      difficult to further unify 32 and 64bit NUMA handling.
      
      This patch unifies it by replacing both apicid_to_node[] and
      apicid_2_node[] with __apicid_to_node[] array, which is accessed
      by two accessors - set_apicid_to_node() and numa_cpu_node().  On
      64bit, numa_cpu_node() always consults __apicid_to_node[]
      directly while 32bit goes through apic->numa_cpu_node() method
      to allow apic implementations to override it.
      
      srat_detect_node() for amd cpus contains a workaround for broken
      NUMA configurations which assumes a relationship between APIC ID,
      HT node ID and NUMA topology.  Leave it accessing
      __apicid_to_node[] directly, as mapping through the CPU might result
      in an undesirable behavior change.  The comment is reformatted and
      updated to note the ugliness.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-14-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
      bbc9e2f4
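      Roughly, the unified mapping described above is a single array plus two
      accessors; on 32bit the read side can be routed through an apic callback so an
      apic driver may override it.  The snippet below is a simplified, self-contained
      illustration (the cpu-to-apicid array and sizes are sketch assumptions, not the
      kernel's definitions):

        #define MAX_LOCAL_APIC 256
        #define NUMA_NO_NODE   (-1)

        static int __apicid_to_node[MAX_LOCAL_APIC];  /* apicid -> node, shared map */
        static int cpu_apicid[64];                    /* sketch's cpu -> apicid map */

        static void set_apicid_to_node(int apicid, int node)
        {
                __apicid_to_node[apicid] = node;
        }

        /* 64bit reads the array directly; 32bit would go through the
         * apic->numa_cpu_node() method instead. */
        static int numa_cpu_node(int cpu)
        {
                int apicid = cpu_apicid[cpu];

                return apicid >= 0 ? __apicid_to_node[apicid] : NUMA_NO_NODE;
        }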
  4. 19 Jan 2011 (1 commit)
  5. 24 Dec 2010 (1 commit)
  6. 16 Feb 2010 (1 commit)
    • x86, numa: Add fixed node size option for numa emulation · 8df5bb34
      Committed by David Rientjes
      numa=fake=N specifies the number of fake nodes, N, to partition the
      system into and then allocates them by interleaving over physical nodes.
      This requires knowledge of the system capacity when attempting to
      allocate nodes of a certain size: either very large nodes to benchmark
      scalability of code that operates on individual nodes, or very small
      nodes to find bugs in the VM.
      
      This patch introduces numa=fake=<size>[MG] so it is possible to specify
      the size of each node to allocate.  When used, nodes of the size
      specified will be allocated and interleaved over the set of physical
      nodes.
      
      FAKE_NODE_MIN_SIZE was also moved to the more-appropriate
      include/asm/numa_64.h.
      Signed-off-by: David Rientjes <rientjes@google.com>
      LKML-Reference: <alpine.DEB.2.00.1002151342510.26927@chino.kir.corp.google.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      8df5bb34
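      A hedged sketch of how such an option could be parsed: a trailing M or G suffix
      selects a fixed node size, and a bare number keeps the old node-count meaning.
      This is illustrative only and not the kernel's actual command-line parser:

        #include <stdio.h>
        #include <stdlib.h>

        /* Parse "numa=fake=N" (node count) or "numa=fake=<size>M|G" (node size). */
        static void parse_numa_fake(const char *arg, long *nr_nodes,
                                    unsigned long long *node_size)
        {
                char *end;
                unsigned long long val = strtoull(arg, &end, 0);

                *nr_nodes = 0;
                *node_size = 0;
                if (*end == 'M' || *end == 'm')
                        *node_size = val << 20;
                else if (*end == 'G' || *end == 'g')
                        *node_size = val << 30;
                else
                        *nr_nodes = (long)val;
        }

        int main(void)
        {
                long n;
                unsigned long long sz;

                parse_numa_fake("128M", &n, &sz);
                printf("nodes=%ld size=%llu bytes\n", n, sz);  /* size=134217728 */
                return 0;
        }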
  7. 18 May 2009 (2 commits)
    • x86, mm: Fix node_possible_map logic · 7c43769a
      Committed by Yinghai Lu
      Recently there were some changes to the meaning of node_possible_map,
      and it is quite strange:
      
      - a node without memory would be set in node_possible_map
      - but a node with less than NODE_MIN_SIZE of memory would be kicked out of node_possible_map.
      
      Fix it by adding strict_setup_node_bootmem().
      
      Also, remove unparse_node().
      
      So the result will be:
      
      1. cpu_to_node() will return an online node only (the nearest one)
      2. apicid_to_node() still returns a node that may not be online but is set
         in node_possible_map.
      3. node_possible_map will include nodes whose memory is less than NODE_MIN_SIZE
      
      v2: after move_cpus_to_node change.
      
      [ Impact: get node_possible_map right ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Tested-by: Jack Steiner <steiner@sgi.com>
      LKML-Reference: <4A0C49BE.6080800@kernel.org>
      [ v3: various small cleanups and comment clarifications ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7c43769a
    • mm, x86: remove MEMORY_HOTPLUG_RESERVE related code · 888a589f
      Committed by Yinghai Lu
      after:
      
       | commit b263295d
       | Author: Christoph Lameter <clameter@sgi.com>
       | Date:   Wed Jan 30 13:30:47 2008 +0100
       |
       |    x86: 64-bit, make sparsemem vmemmap the only memory model
      
      we don't have MEMORY_HOTPLUG_RESERVE anymore.
      
      Historically, x86-64 had an architecture-specific method for memory hotplug
      whereby it scanned the SRAT for physical memory ranges that could be
      potentially used for memory hot-add later. By reserving those ranges
      without physical memory, the memmap would be allocated and left dormant
      until needed. This depended on the DISCONTIG memory model which has been
      removed so the code implementing HOTPLUG_RESERVE is now dead.
      
      This patch removes the dead code used by MEMORY_HOTPLUG_RESERVE.
      
      (Changelog authored by Mel.)
      
      v2: updated changelog and removed hotadd= from the doc
      
      [ Impact: remove dead code ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Workflow-found-OK-by: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4A0C4910.7090508@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      888a589f
  8. 23 Oct 2008 (2 commits)
  9. 23 Jul 2008 (1 commit)
    • x86: consolidate header guards · 77ef50a5
      Committed by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers is turned into single
         underscores.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
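      For example, applying the three rules to include/asm-x86/numa_64.h would give a
      guard along the following lines; the exact spelling produced by the script is not
      quoted in this entry, so treat the name as illustrative:

        /* "asm-x86/numa_64.h" with rules 1-3 applied: no leading underscore,  */
        /* "/" becomes "__", and "-" and "." each become a single "_".         */
        #ifndef ASM_X86__NUMA_64_H
        #define ASM_X86__NUMA_64_H

        /* ...declarations... */

        #endif /* ASM_X86__NUMA_64_H */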
  10. 08 Jul 2008 (2 commits)
    • x86: introduce initmem_init for 64 bit · 1f75d7e3
      Committed by Yinghai Lu
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1f75d7e3
    • x86: cleanup early per cpu variables/accesses v4 · 23ca4bba
      Committed by Mike Travis
        * Introduce a new PER_CPU macro called "EARLY_PER_CPU".  This is
          used by some per_cpu variables that are initialized and accessed
          before there are per_cpu areas allocated.
      
          ["Early" in respect to per_cpu variables is "earlier than the per_cpu
          areas have been setup".]
      
          This patchset adds these new macros:
      
      	DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
      	EXPORT_EARLY_PER_CPU_SYMBOL(_name)
      	DECLARE_EARLY_PER_CPU(_type, _name)
      
      	early_per_cpu_ptr(_name)
      	early_per_cpu_map(_name, _idx)
      	early_per_cpu(_name, _cpu)
      
          The DEFINE macro defines the per_cpu variable as well as the early
          map and pointer.  It also initializes the per_cpu variable and map
          elements to "_initvalue".  The early_* macros provide access to
          the initial map (usually setup during system init) and the early
          pointer.  This pointer is initialized to point to the early map
          but is then NULL'ed when the actual per_cpu areas are setup.  After
          that the per_cpu variable is the correct access to the variable.
      
          The early_per_cpu() macro is not very efficient but does show how to
          access the variable if you have a function that can be called both
          "early" and "late".  It tests the early ptr to be NULL, and if not
          then it's still valid.  Otherwise, the per_cpu variable is used
          instead:
      
      	#define early_per_cpu(_name, _cpu) 			\
      		(early_per_cpu_ptr(_name) ?			\
      			early_per_cpu_ptr(_name)[_cpu] :	\
      			per_cpu(_name, _cpu))
      
          A better method is to actually check the pointer manually.  In the
          case below, numa_set_node can be called both "early" and "late":
      
      	void __cpuinit numa_set_node(int cpu, int node)
      	{
      	    int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);
      
      	    if (cpu_to_node_map)
      		    cpu_to_node_map[cpu] = node;
      	    else
      		    per_cpu(x86_cpu_to_node_map, cpu) = node;
      	}
      
        * Add a flag "arch_provides_topology_pointers" that indicates pointers
          to topology cpumask_t maps are available.  Otherwise, use the function
          returning the cpumask_t value.  This is useful when the cpumask_t set
          size is very large, to avoid copying data on to/off of the stack.
      
        * The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while
          the non-debug case has been optimized a bit.
      
        * Remove an unreferenced compiler warning in drivers/base/topology.c
      
        * Clean up #ifdef in setup.c
      
      For inclusion into sched-devel/latest tree.
      
      Based on:
      	git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
          +   sched-devel/latest  .../mingo/linux-2.6-sched-devel.git
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      23ca4bba
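      As a usage illustration of the early/late pattern described in this entry, the
      following self-contained sketch mimics the behavior in plain C: an early
      boot-time map plus a pointer that is NULLed once the real per-cpu storage
      exists.  The variable names here are sketch stand-ins, not the macro-generated
      kernel symbols:

        #include <stdio.h>

        #define NR_CPUS      8
        #define NUMA_NO_NODE (-1)

        static int early_cpu_to_node_map[NR_CPUS];                 /* boot-time map  */
        static int *early_cpu_to_node_ptr = early_cpu_to_node_map; /* NULLed later   */
        static int percpu_cpu_to_node[NR_CPUS];            /* stands in for per_cpu() */

        static void numa_set_node(int cpu, int node)
        {
                int *map = early_cpu_to_node_ptr;

                if (map)
                        map[cpu] = node;               /* "early": areas not ready   */
                else
                        percpu_cpu_to_node[cpu] = node; /* "late": per-cpu variable  */
        }

        int main(void)
        {
                numa_set_node(0, 1);              /* early call lands in early map   */
                early_cpu_to_node_ptr = NULL;     /* pretend per-cpu areas are set up */
                numa_set_node(1, 2);              /* late call lands in per-cpu store */
                printf("%d %d\n", early_cpu_to_node_map[0], percpu_cpu_to_node[1]);
                return 0;
        }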
  11. 20 Apr 2008 (1 commit)
  12. 17 Apr 2008 (1 commit)
  13. 30 Jan 2008 (6 commits)
  14. 18 Oct 2007 (1 commit)
  15. 11 Oct 2007 (1 commit)
  16. 23 Jun 2006 (1 commit)
  17. 11 Apr 2006 (1 commit)
    • [PATCH] Configurable NODES_SHIFT · c80d79d7
      Committed by Yasunori Goto
      Current implementations define NODES_SHIFT in include/asm-xxx/numnodes.h for
      each arch.  Its definition is sometimes configurable.  Indeed, ia64 defines 5
      NODES_SHIFT values in the current git tree.  But it looks a bit messy.
      
      The SGI-SN2 (ia64) system requires 1024 nodes, and the number of nodes
      there has already been changeable via config.  A suitable number of nodes
      may need to change in the future on other architectures as well.  So, I
      made the number of nodes configurable.
      
      This patch set just defines a default value for each arch that needs
      multiple nodes, except ia64.  But it is easy to make it configurable if
      necessary.
      
      On ia64 the number of nodes can already be configured in the generic ia64
      and SN2 configs.  But NODES_SHIFT is defined for DIG64 and HP's machines
      too.  So I changed it so that all platforms can be configured via
      CONFIG_NODES_SHIFT.  It would be simpler.
      
      See also: http://marc.theaimsgroup.com/?l=linux-kernel&m=114358010523896&w=2
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c80d79d7
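      The practical effect is that the node limit becomes a derived constant: with a
      Kconfig-provided CONFIG_NODES_SHIFT, the maximum node count is simply
      2^NODES_SHIFT.  A sketch of that relationship (not the literal header text):

        /* NODES_SHIFT comes from the configuration; the maximum number of   */
        /* nodes is then two to that power.                                   */
        #ifdef CONFIG_NODES_SHIFT
        #define NODES_SHIFT   CONFIG_NODES_SHIFT
        #else
        #define NODES_SHIFT   0
        #endif

        #define MAX_NUMNODES  (1 << NODES_SHIFT)  /* e.g. NODES_SHIFT=10 -> 1024 nodes */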
  18. 10 Apr 2006 (1 commit)
    • [PATCH] x86_64: Reserve SRAT hotadd memory on x86-64 · 68a3a7fe
      Committed by Andi Kleen
      From: Keith Mannthey, Andi Kleen
      
      Implement memory hotadd without sparsemem. The memory in the SRAT
      hotadd area is just preserved instead and can be activated later.
      
      There are a few restrictions:
      - Only one contiguous hotadd area is allowed per node
      
      The main problem is dealing with the many buggy SRAT tables
      that are out there. The strategy here is to reject anything
      suspicious.
      
      Originally from Keith Mannthey, with several hacks and changes by AK
      and also contributions from Andrew Morton
      
      [ TBD: Problems pointed out by KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>:
      
       1) Goto's rebuild_zonelist patch will not work if CONFIG_MEMORY_HOTPLUG=n.
      
          Rebuilding the zonelist is necessary when the system has only memory <
          4G at boot and then hot-adds memory > 4G.  Because x86_64 has DMA32,
          ZONE_NORMAL is not included in the zonelist at boot time if the system
          doesn't have memory > 4G at boot.
      
          [AK: should just force the higher zones at boot time when SRAT tells us]
      
       2) zone and node's spanned_pages and present_pages are not incremented.
          They should be.
      
          For example, our server (ia64/Fujitsu PrimeQuest) can equip memory
          from 4G to 1T (maybe 2T in the future), and SRAT will *always* say we
          have a possible 1T+ of memory.  (Microsoft requires "write all possible
          memory in SRAT".)  When we reserve memmap for a possible 1T of memory,
          Linux will not work well in a minimum 4G configuration ;)
      
          [AK: needs limiting to 5-10% of max memory]
       ]
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      68a3a7fe
  19. 26 Mar 2006 (1 commit)
  20. 08 Feb 2006 (1 commit)
  21. 06 Feb 2006 (1 commit)
  22. 05 Feb 2006 (1 commit)
  23. 12 Jan 2006 (1 commit)
  24. 15 Nov 2005 (1 commit)
  25. 13 Sep 2005 (1 commit)