1. 25 Feb, 2006 1 commit
  2. 18 Feb, 2006 1 commit
  3. 16 Feb, 2006 1 commit
  4. 15 Feb, 2006 2 commits
  5. 12 Feb, 2006 1 commit
    • [PATCH] fstatat64 support · cff2b760
      Authored by Ulrich Drepper
      The *at patches introduced fstatat and, due to insufficient research, I
      used the newfstat functions generally as the guideline.  The result is that
      on 32-bit platforms we don't have all the information needed to implement
      fstatat64.
      
      This patch modifies the code to pass up 64-bit information if
      __ARCH_WANT_STAT64 is defined.  I renamed the syscall entry point to make
      this clear.  Other archs will continue to use the existing code.  On x86-64
      the compat code is implemented using a new sys32_ function.  This is what
      is done for the other stat syscalls as well.
      
      This patch might break some other archs (those which define
      __ARCH_WANT_STAT64 and which already wired up the syscall).  Yet others
      might need changes to accommodate the compatibility mode.  I really don't
      want to do that work because all this stat handling is a mess (more so in
      glibc, but the kernel is also affected).  It should be done by the arch
      maintainers.  I'll provide a stand-alone test shortly.  Those who are
      eager could compile glibc and run 'make check' (no installation needed).
      
      The patch below has been tested on x86 and x86-64.
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
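      A minimal usage sketch (not part of the patch): it goes through the
      glibc fstatat() wrapper, which on 32-bit builds with
      _FILE_OFFSET_BITS=64 is the path that ends up in the new fstatat64
      entry point. A glibc new enough to expose the *at interfaces is
      assumed.

      #define _FILE_OFFSET_BITS 64  /* assumption: maps stat to the 64-bit variant on 32-bit */
      #include <fcntl.h>            /* AT_FDCWD, AT_SYMLINK_NOFOLLOW */
      #include <sys/stat.h>
      #include <stdio.h>

      int main(void)
      {
              struct stat st;

              /* stat a path relative to AT_FDCWD, without following a
               * trailing symlink */
              if (fstatat(AT_FDCWD, "/etc/passwd", &st, AT_SYMLINK_NOFOLLOW) != 0) {
                      perror("fstatat");
                      return 1;
              }
              printf("size: %lld bytes\n", (long long)st.st_size);
              return 0;
      }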
  6. 08 Feb, 2006 1 commit
  7. 06 Feb, 2006 1 commit
  8. 04 Feb, 2006 1 commit
    • [PATCH] Export cpu topology in sysfs · 69dcc991
      Authored by Zhang, Yanmin
      The patch exports cpu topology through sysfs.
      
      Items (attributes) are similar to /proc/cpuinfo.
      
      1) /sys/devices/system/cpu/cpuX/topology/physical_package_id:
      	represents the physical package id of cpu X;
      2) /sys/devices/system/cpu/cpuX/topology/core_id:
      	represents the cpu core id of cpu X;
      3) /sys/devices/system/cpu/cpuX/topology/thread_siblings:
      	represents the thread siblings of cpu X within the same core;
      4) /sys/devices/system/cpu/cpuX/topology/core_siblings:
      	represents the thread siblings of cpu X within the same physical package;
      
      To implement it in an architecture-neutral way, a new source file,
      driver/base/topology.c, exports the 4 attributes.
      
      If an architecture wants to support this feature, it just needs to
      implement 4 defines, typically in the file include/asm-XXX/topology.h.
      The 4 defines are:
      #define topology_physical_package_id(cpu)
      #define topology_core_id(cpu)
      #define topology_thread_siblings(cpu)
      #define topology_core_siblings(cpu)
      
      The type of **_id is int.
      The type of siblings is cpumask_t.
      
      To be consistent on all architectures, the 4 attributes should have
      default values if their values are unavailable. Below are the rules.
      
      1) physical_package_id: if the cpu has no physical package id, -1 is the
      default value.
      
      2) core_id: if the cpu doesn't support multi-core, its core id is 0.
      
      3) thread_siblings: just the cpu itself, if the cpu doesn't support
      HT/multi-threading.
      
      4) core_siblings: just the cpu itself, if the cpu doesn't support
      multi-core or HT/multi-threading.
      
      So be careful when declaring the 4 defines in include/asm-XXX/topology.h.
      
      If an attribute isn't defined on an architecture, it won't be exported.
      
      Thanks to Nathan, Greg, Andi, Paul and Venki.
      
      The patch provides defines for i386/x86_64/ia64.
      Signed-off-by: Zhang, Yanmin <yanmin.zhang@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Greg KH <greg@kroah.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
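      A hedged userspace sketch of reading the new attributes (paths as
      listed above; fopen() failing simply means the architecture doesn't
      provide the corresponding define, so the attribute isn't exported):

      #include <stdio.h>

      /* Read one topology attribute of a cpu into buf; returns -1 if the
       * attribute isn't exported on this architecture. */
      static int read_topology_attr(int cpu, const char *attr,
                                    char *buf, int len)
      {
              char path[128];
              FILE *f;

              snprintf(path, sizeof(path),
                       "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, attr);
              f = fopen(path, "r");
              if (!f)
                      return -1;
              if (!fgets(buf, len, f))
                      buf[0] = '\0';
              fclose(f);
              return 0;
      }

      int main(void)
      {
              char buf[256];

              if (read_topology_attr(0, "physical_package_id", buf, sizeof(buf)) == 0)
                      printf("cpu0 physical_package_id: %s", buf);
              if (read_topology_attr(0, "thread_siblings", buf, sizeof(buf)) == 0)
                      printf("cpu0 thread_siblings: %s", buf);
              return 0;
      }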
  9. 02 Feb, 2006 1 commit
    • [PATCH] VMSPLIT config options · 975b3d3d
      Authored by Mark Lord
      Enable selection of different user/kernel VM splits for i386, including an
      optimized mode for 1GB of physical RAM, which gives the kernel a direct
      (non-HIGHMEM) mapping of the entire 1GB rather than just the first 896MB.
      
      There is a similarly optimized mode for machines with exactly 2GB of
      physical RAM.
      
      This can speed up the kernel by avoiding having to create/destroy temporary
      HIGHMEM mappings, and by not having to include HIGHMEM support at all on such
      machines.  The flip side is that there's less virtual addressing left for
      userspace in these alternatives, and some binary-only kernel modules may
      misbehave unless rebuilt with the same VMSPLIT option as the main kernel
      image.
      
      Original idea/patch from Jens Axboe, modified based on suggestions from Linus
      et al.
      Signed-off-by: Mark Lord <mlord@pobox.com>
      Signed-off-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
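      A hedged illustration of the trade-off. The PAGE_OFFSET values below
      follow the i386 CONFIG_VMSPLIT_* options of this era; the actual
      direct-mapped size is somewhat smaller than the printed kernel window,
      since part of it is reserved for vmalloc/fixmaps:

      #include <stdio.h>

      int main(void)
      {
              static const struct {
                      const char *opt;
                      unsigned long long page_offset;
              } splits[] = {
                      { "VMSPLIT_3G (default)", 0xC0000000ULL },
                      { "VMSPLIT_3G_OPT",       0xB0000000ULL },  /* aimed at 1GB RAM */
                      { "VMSPLIT_2G",           0x80000000ULL },  /* aimed at 2GB RAM */
                      { "VMSPLIT_1G",           0x40000000ULL },
              };
              unsigned int i;

              for (i = 0; i < sizeof(splits) / sizeof(splits[0]); i++) {
                      unsigned long long user   = splits[i].page_offset;
                      unsigned long long kernel = 0x100000000ULL - user;
                      printf("%-22s userspace: %4lluMB  kernel window: %4lluMB\n",
                             splits[i].opt, user >> 20, kernel >> 20);
              }
              return 0;
      }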
  10. 19 Jan, 2006 6 commits
  11. 15 Jan, 2006 1 commit
  12. 13 Jan, 2006 6 commits
    • [PATCH] death of get_thread_info/put_thread_info · f5a61d0c
      Authored by Al Viro
      {get,put}_thread_info() were introduced in 2.5.4 and have never
      been called by anything in the tree.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: task_stack_page() · 65e0fdff
      Authored by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: fix task_pt_regs() · 07b047fc
      Authored by akpm@osdl.org
      From: Al Viro <viro@ftp.linux.org.uk>
      
      task_pt_regs() needs the same offset-by-8 to match copy_thread()
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
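      A hedged sketch of the offset-by-8 idea (illustrative, not the literal
      patch): on i386, copy_thread() builds the user-mode register frame
      8 bytes below the top of the stack page, so task_pt_regs() must
      subtract the same 8 bytes before stepping back over the pt_regs:

      struct task_struct;
      struct pt_regs;                 /* register frame saved on kernel entry */

      /* from the task_stack_page() patch above */
      void *task_stack_page(struct task_struct *task);

      #define THREAD_SIZE 8192UL      /* assumption: 8K kernel stacks */

      /* copy_thread() leaves an 8-byte gap at the top of the stack page and
       * places the pt_regs frame just below it; apply the same offset here. */
      #define task_pt_regs(task)                                             \
              ((struct pt_regs *)((unsigned long)task_stack_page(task) +     \
                                  THREAD_SIZE - 8) - 1)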
    • [PATCH] i386: task_thread_info() · 06b425d8
      Authored by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] scheduler cache-hot-autodetect · 198e2f18
      Authored by akpm@osdl.org
      From: Ingo Molnar <mingo@elte.hu>
      
      This is the latest version of the scheduler cache-hot-auto-tune patch.
      
      The first problem was that detection time scaled with O(N^2), which is
      unacceptable on larger SMP and NUMA systems. To solve this:
      
      - I've added a 'domain distance' function, which is used to cache
        measurement results. Each distance is only measured once. This means
        that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
        distances 0 and 1, and on SMP distance 0 is measured. The code walks
        the domain tree to determine the distance, so it automatically follows
        whatever hierarchy an architecture sets up. This cuts down on the boot
        time significantly and removes the O(N^2) limit. The only assumption
        is that migration costs can be expressed as a function of domain
        distance - this covers the overwhelming majority of existing systems,
        and is a good guess even for more asymmetric systems.
      
        [ People hacking systems that have asymmetries that break this
          assumption (e.g. different CPU speeds) should experiment a bit with
          the cpu_distance() function. Adding a ->migration_distance factor to
          the domain structure would be one possible solution - but let's first
          see the problem systems, if they exist at all. Let's not overdesign. ]
      
      Another problem was that only a single cache-size was used for measuring
      the cost of migration, and most architectures didn't set that variable
      up. Furthermore, a single cache-size does not fit NUMA hierarchies with
      L3 caches and does not fit HT setups, where different CPUs will often
      have different 'effective cache sizes'. To solve this problem:
      
      - Instead of relying on a single cache-size provided by the platform and
        sticking to it, the code now auto-detects the 'effective migration
        cost' between two measured CPUs, by iterating through a wide range of
        cache sizes. The code searches for the maximum migration cost, which
        occurs when the working set of the test-workload falls just below the
        'effective cache size'. I.e. a real-life optimized search is done for
        the maximum migration cost, between two real CPUs.
      
        This, amongst other things, has the positive effect that if e.g. two
        CPUs share an L2/L3 cache, a different (and accurate) migration cost
        will be found than between two CPUs on the same system that don't share
        any caches.
      
      (The reliable measurement of migration costs is tricky - see the source
      for details.)
      
      Furthermore, I've added various boot-time options to override/tune
      migration behavior.
      
      Firstly, there's a blanket override for autodetection:
      
      	migration_cost=1000,2000,3000
      
      will override the depth 0/1/2 values with 1msec/2msec/3msec values.
      
      Secondly, there's a global factor that can be used to increase (or
      decrease) the autodetected values:
      
      	migration_factor=120
      
      will increase the autodetected values by 20%. This option is useful to
      tune things in a workload-dependent way - e.g. if a workload is
      cache-insensitive then CPU utilization can be maximized by specifying
      migration_factor=0.
      
      I've tested the autodetection code quite extensively on x86, on 3 systems
      (dual Celeron, dual HT P4 and an 8-way P3/Xeon/2MB), and the autodetected
      values look pretty good:
      
      Dual Celeron (128K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
       ---------------------
                 [00]    [01]
       [00]:     -     1.7(1)
       [01]:   1.7(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 1.7 (1784008)
       ---------------------
      
      Here the slow memory subsystem dominates system performance, and even
      though caches are small, the migration cost is 1.7 msecs.
      
      Dual HT P4 (512K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]
       [00]:     -     0.4(1)  0.0(0)  0.4(1)
       [01]:   0.4(1)    -     0.4(1)  0.0(0)
       [02]:   0.0(0)  0.4(1)    -     0.4(1)
       [03]:   0.4(1)  0.0(0)  0.4(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (33900) 0.4 (448514)
       ---------------------
      
      Here it can be seen that there is no migration cost between two HT
      siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory
      system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.
      
      8-way P3/Xeon [2MB L2 cache]:
      
       ---------------------
       migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
       [00]:     -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [01]:  19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [02]:  19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [03]:  19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [04]:  19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
       [05]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
       [06]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
       [07]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 19.2 (19281756)
       ---------------------
      
      This one has huge caches and a relatively slow memory subsystem - so the
      migration cost is 19 msecs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Cc: <wilder@us.ibm.com>
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
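      To show where the calibrated (or migration_cost=-overridden) number
      ends up, a hedged sketch of how a per-domain migration cost typically
      gates migration in the 2.6-era scheduler; the structure and field
      names here are assumptions for illustration, not quotes from the
      patch:

      /* Hedged sketch: cache_hot_time would come from the calibration run
       * (or a migration_cost= boot override), in nanoseconds. */
      struct sched_domain_sketch {
              unsigned long long cache_hot_time;
      };

      struct task_sketch {
              unsigned long long last_ran;    /* timestamp of last descheduling */
      };

      /* A task is considered "cache hot" - and is left where it is during
       * load balancing - if it ran more recently than the domain's measured
       * migration cost. */
      int task_hot(struct task_sketch *p, unsigned long long now,
                   struct sched_domain_sketch *sd)
      {
              return (long long)(now - p->last_ran) <
                     (long long)sd->cache_hot_time;
      }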
    • [PATCH] sched: add cacheflush() asm · 4dc7a0bb
      Authored by Ingo Molnar
      Add a per-arch sched_cacheflush(), which is a write-back cache flush used
      by the migration-cost calibration code at boot time.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
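      On x86 this kind of write-back flush maps naturally onto wbinvd; a
      hedged sketch of what the i386 flavor of such a helper can look like
      (other architectures would substitute their own flush primitive):

      /* wbinvd writes back and invalidates all caches - exactly the blunt
       * write-back flush the calibration code wants between measurements.
       * Privileged instruction: kernel context only. */
      static inline void sched_cacheflush(void)
      {
              asm volatile("wbinvd" : : : "memory");
      }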
  13. 12 Jan, 2006 9 commits
  14. 11 Jan, 2006 8 commits