1. 14 Jan 2006 (10 commits)
  2. 13 Jan 2006 (4 commits)
    • [PATCH] ia64: task_pt_regs() · 6450578f
      Committed by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6450578f
    • [PATCH] ia64: task_thread_info() · ab03591d
      Committed by Al Viro
      On ia64, thread_info is at a constant offset from task_struct, and the
      stack is embedded into the same beast.  Set __HAVE_THREAD_FUNCTIONS and
      make task_thread_info() just add a constant.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ab03591d
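      A minimal sketch of the idea (illustrative only, not the exact ia64
      headers; the offset constant is an assumption): with thread_info and the
      stack co-located with task_struct, task_thread_info() reduces to adding a
      compile-time constant to the task pointer.

        #include <stdint.h>

        struct thread_info { int cpu; /* ... */ };
        struct task_struct { long state; /* ... */ };

        /* Hypothetical constant offset of thread_info from the start of
         * task_struct (on ia64 the stack/thread_info block follows the task). */
        #define TASK_TI_OFFSET 0x1000UL

        /* task_thread_info() becomes pure constant-offset arithmetic. */
        static inline struct thread_info *task_thread_info(struct task_struct *tsk)
        {
                return (struct thread_info *)((uintptr_t)tsk + TASK_TI_OFFSET);
        }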
    • [PATCH] scheduler cache-hot-autodetect · 198e2f18
      Committed by akpm@osdl.org
      
      From: Ingo Molnar <mingo@elte.hu>
      
      This is the latest version of the scheduler cache-hot-auto-tune patch.
      
      The first problem was that detection time scaled with O(N^2), which is
      unacceptable on larger SMP and NUMA systems. To solve this:
      
      - I've added a 'domain distance' function, which is used to cache
        measurement results. Each distance is only measured once. This means
        that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
        distances 0 and 1, and on SMP distance 0 is measured. The code walks
        the domain tree to determine the distance, so it automatically follows
        whatever hierarchy an architecture sets up. This cuts down on the boot
        time significantly and removes the O(N^2) limit. The only assumption
        is that migration costs can be expressed as a function of domain
        distance - this covers the overwhelming majority of existing systems,
        and is a good guess even for more asymmetric systems.
      
        [ People hacking systems that have asymmetries that break this
          assumption (e.g. different CPU speeds) should experiment a bit with
          the cpu_distance() function. Adding a ->migration_distance factor to
          the domain structure would be one possible solution - but let's first
          see the problem systems, if they exist at all. Let's not overdesign. ]
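
      A rough sketch of the caching described above (illustrative only; the
      structures and helper names are assumptions, not the scheduler's real
      code): the distance between two CPUs is found by walking the sched-domain
      hierarchy, and each distance is measured exactly once.

        #define MAX_DOMAIN_DISTANCE 8

        struct sched_domain {
                struct sched_domain *parent;
                unsigned long span;             /* bitmask of CPUs covered */
        };

        static long long cached_cost[MAX_DOMAIN_DISTANCE];
        static int cost_measured[MAX_DOMAIN_DISTANCE];

        /* Walk up the domain tree until one domain spans both CPUs. */
        static int domain_distance(struct sched_domain *sd, int cpu1, int cpu2)
        {
                int distance = 0;

                for (; sd; sd = sd->parent, distance++)
                        if ((sd->span & (1UL << cpu1)) && (sd->span & (1UL << cpu2)))
                                break;
                return distance < MAX_DOMAIN_DISTANCE ? distance : MAX_DOMAIN_DISTANCE - 1;
        }

        /* Measure each distance only once; reuse the cached value afterwards. */
        static long long migration_cost_between(struct sched_domain *sd, int cpu1, int cpu2,
                                                long long (*measure)(int, int))
        {
                int d = domain_distance(sd, cpu1, cpu2);

                if (!cost_measured[d]) {
                        cached_cost[d] = measure(cpu1, cpu2);
                        cost_measured[d] = 1;
                }
                return cached_cost[d];
        }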
      
      Another problem was that only a single cache-size was used for measuring
      the cost of migration, and most architectures didn't set that variable
      up. Furthermore, a single cache-size does not fit NUMA hierarchies with
      L3 caches and does not fit HT setups, where different CPUs will often
      have different 'effective cache sizes'. To solve this problem:
      
      - Instead of relying on a single cache-size provided by the platform and
        sticking to it, the code now auto-detects the 'effective migration
        cost' between two measured CPUs, via iterating through a wide range of
        cachesizes. The code searches for the maximum migration cost, which
        occurs when the working set of the test-workload falls just below the
        'effective cache size'. I.e. real-life optimized search is done for
        the maximum migration cost, between two real CPUs.
      
        This, amongst other things, has the positive effect that if e.g. two
        CPUs share an L2/L3 cache, a different (and accurate) migration cost
        will be found than between two CPUs on the same system that don't share
        any caches.
      
      (The reliable measurement of migration costs is tricky - see the source
      for details.)
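
      A simplified sketch of the search described above (the helper below is an
      assumption about the shape of the calibration, not the actual code): the
      test working set is swept across a range of sizes, and the size that
      maximizes the measured cost approximates the 'effective cache size'.

        /* Illustrative: sweep working-set sizes and keep the maximum measured
         * migration cost.  measure_one() is a hypothetical helper that bounces
         * a task touching `size` bytes between cpu1 and cpu2 and returns the
         * cost in nanoseconds. */
        static unsigned long long
        find_max_migration_cost(int cpu1, int cpu2,
                                unsigned long min_size, unsigned long max_size,
                                unsigned long long (*measure_one)(unsigned long, int, int))
        {
                unsigned long long max_cost = 0;
                unsigned long size;

                for (size = min_size; size <= max_size; size += size / 4 + 1) {
                        unsigned long long cost = measure_one(size, cpu1, cpu2);

                        /* the cost peaks just below the effective cache size */
                        if (cost > max_cost)
                                max_cost = cost;
                }
                return max_cost;
        }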
      
      Furthermore, I've added various boot-time options to override/tune
      migration behavior.
      
      Firstly, there's a blanket override for autodetection:
      
      	migration_cost=1000,2000,3000
      
      will override the depth 0/1/2 values with 1msec/2msec/3msec values.
      
      Secondly, there's a global factor that can be used to increase (or
      decrease) the autodetected values:
      
      	migration_factor=120
      
      will increase the autodetected values by 20%. This option is useful to
      tune things in a workload-dependent way - e.g. if a workload is
      cache-insensitive then CPU utilization can be maximized by specifying
      migration_factor=0.
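
      A hedged sketch of how such comma-separated boot parameters are commonly
      wired up (the handler and the override array below are illustrative, not
      necessarily what the patch does):

        #include <linux/init.h>
        #include <linux/kernel.h>

        /* Hypothetical per-depth override table (depth 0/1/2), in nanoseconds;
         * -1 means "use the autodetected value". */
        static long long migration_cost_override[3] = { -1, -1, -1 };

        static int __init setup_migration_cost(char *str)
        {
                int ints[4];    /* ints[0] holds the count, then up to 3 values */
                int i;

                get_options(str, ARRAY_SIZE(ints), ints);
                for (i = 0; i < ints[0] && i < 3; i++)
                        migration_cost_override[i] = ints[i + 1] * 1000000LL; /* msec -> nsec */
                return 1;
        }
        __setup("migration_cost=", setup_migration_cost);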
      
      I've tested the autodetection code quite extensively on x86, on three
      systems (dual Celeron, dual HT P4 and an 8-way P3/Xeon with 2MB L2), and
      the autodetected values look pretty good:
      
      Dual Celeron (128K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
       ---------------------
                 [00]    [01]
       [00]:     -     1.7(1)
       [01]:   1.7(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 1.7 (1784008)
       ---------------------
      
      Here the slow memory subsystem dominates system performance, and even
      though caches are small, the migration cost is 1.7 msecs.
      
      Dual HT P4 (512K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]
       [00]:     -     0.4(1)  0.0(0)  0.4(1)
       [01]:   0.4(1)    -     0.4(1)  0.0(0)
       [02]:   0.0(0)  0.4(1)    -     0.4(1)
       [03]:   0.4(1)  0.0(0)  0.4(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (33900) 0.4 (448514)
       ---------------------
      
      Here it can be seen that there is no migration cost between two HT
      siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory
      system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.
      
      8-way P3/Xeon [2MB L2 cache]:
      
       ---------------------
       migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
       [00]:     -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [01]:  19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [02]:  19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [03]:  19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [04]:  19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
       [05]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
       [06]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
       [07]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 19.2 (19281756)
       ---------------------
      
      This one has huge caches and a relatively slow memory subsystem - so the
      migration cost is 19 msecs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Cc: <wilder@us.ibm.com>
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      198e2f18
    • [PATCH] sched: add cacheflush() asm · 4dc7a0bb
      Committed by Ingo Molnar
      Add per-arch sched_cacheflush() which is a write-back cacheflush used by
      the migration-cost calibration code at bootup time.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4dc7a0bb
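      For illustration, one plausible per-arch implementation (an assumption
      about the x86 flavour; other architectures may provide a different or
      empty hook): a full write-back-and-invalidate of the caches.

        /* Illustrative x86-style sched_cacheflush(): wbinvd writes back and
         * invalidates all caches before a migration-cost measurement. */
        static inline void sched_cacheflush(void)
        {
                asm volatile("wbinvd" : : : "memory");
        }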
  3. 12 Jan 2006 (2 commits)
  4. 11 Jan 2006 (3 commits)
  5. 09 Jan 2006 (5 commits)
    • [PATCH] /dev/mem: validate mmap requests · 80851ef2
      Committed by Bjorn Helgaas
      Add a hook so architectures can validate /dev/mem mmap requests.
      
      This is analogous to validation we already perform in the read/write
      paths.
      
      The identity mapping scheme used on ia64 requires that each 16MB or
      64MB granule be accessed with exactly one attribute (write-back or
      uncacheable).  This avoids "attribute aliasing", which can cause a
      machine check.
      
      Sample problem scenario:
        - Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
        - efi_memmap_init() discards any write-back (WB) memory in the first granule
        - Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
        - hwinfo receives UC mapping (the default, since memmap says "no WB here")
        - Machine check abort (on chipsets that don't support UC access to WB
          memory, e.g., sx1000)
      
      In the scenario above, the only choices are
        - Use WB for hwinfo mmap.  Can't do this because it causes attribute
          aliasing with the UC mapping for the VGA MMIO space.
        - Use UC for hwinfo mmap.  Can't do this because the chipset may not
          support UC for that region.
        - Disallow the hwinfo mmap with -EINVAL.  That's what this patch does.
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      80851ef2
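      A hedged sketch of the kind of hook described above (the helper name and
      wiring are assumptions, not the patch's exact interface): the /dev/mem
      mmap path asks the architecture whether the requested physical range may
      be mapped, and rejects the request with -EINVAL otherwise.

        #include <linux/fs.h>
        #include <linux/mm.h>
        #include <linux/errno.h>

        /* Hypothetical arch hook: non-zero if [pfn, pfn + size) may be mmap'ed
         * with a single, safe memory attribute. */
        int arch_valid_mmap_phys_range(unsigned long pfn, size_t size);

        static int mmap_mem_checked(struct file *file, struct vm_area_struct *vma)
        {
                size_t size = vma->vm_end - vma->vm_start;

                if (!arch_valid_mmap_phys_range(vma->vm_pgoff, size))
                        return -EINVAL;   /* avoid attribute aliasing / machine checks */

                /* ... otherwise fall through to the usual remap_pfn_range() path ... */
                return 0;
        }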
    • [PATCH] remove gcc-2 checks · a1365647
      Committed by Andrew Morton
      Remove various things which were checking for gcc-1.x and gcc-2.x compilers.
      
      From: Adrian Bunk <bunk@stusta.de>
      
          Some documentation updates, and removal of some code paths for gcc < 3.2.
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a1365647
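      For context, a hedged example of the style of guard such cleanups remove
      (illustrative only, not a specific hunk from this patch):

        /* Before: work around pre-3.x compilers */
        #if __GNUC__ < 3
        # define __noinline                            /* attribute unsupported */
        #else
        # define __noinline __attribute__((noinline))
        #endif

        /* After: gcc >= 3.2 is assumed, so the conditional collapses */
        #define __noinline __attribute__((noinline))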
    • [PATCH] use ptrace_get_task_struct in various places · 6b9c7ed8
      Committed by Christoph Hellwig
      The ptrace_get_task_struct() helper that I added as part of the ptrace
      consolidation is useful in a variety of places that currently open-code
      it.  Switch them to the common helpers.
      
      Add a ptrace_traceme() helper that needs to be explicitly called, and simplify
      the ptrace_get_task_struct() interface.  We don't need the request argument
      now, and we return the task_struct directly, using ERR_PTR() for error
      returns.  It's a bit more code in the callers, but we have two sane routines
      that do one thing well now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6b9c7ed8
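      A hedged sketch of the simplified interface described above (inferred from
      the text; the body may differ from the actual helper): the function takes
      only a PID and returns the task_struct directly, using ERR_PTR() for
      errors, so callers check it with IS_ERR()/PTR_ERR().

        #include <linux/sched.h>
        #include <linux/err.h>

        struct task_struct *ptrace_get_task_struct(pid_t pid)
        {
                struct task_struct *child;

                read_lock(&tasklist_lock);
                child = find_task_by_pid(pid);
                if (child)
                        get_task_struct(child);
                read_unlock(&tasklist_lock);

                return child ? child : ERR_PTR(-ESRCH);
        }

        /* Caller side:
         *      child = ptrace_get_task_struct(pid);
         *      if (IS_ERR(child))
         *              return PTR_ERR(child);
         */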
    • [PATCH] Swap Migration V5: sys_migrate_pages interface · 39743889
      Committed by Christoph Lameter
      sys_migrate_pages implementation using swap based page migration
      
      This is the original API proposed by Ray Bryant in his posts during the first
      half of 2005 on linux-mm@kvack.org and linux-kernel@vger.kernel.org.
      
      The intent of sys_migrate_pages is to migrate the memory of a process.  A
      process may have migrated to another node.  Memory was allocated optimally
      for the prior context.  sys_migrate_pages allows shifting the memory to
      the new node.

      sys_migrate_pages is also useful if a process's available memory nodes
      have changed through cpuset operations, to manually move the process's
      memory.  Paul Jackson is working on an automated mechanism that will allow
      automatic migration if the cpuset of a process is changed.  However, a
      user may decide to manually control the migration.
      
      This implementation is put into the policy layer since it uses concepts and
      functions that are also needed for mbind and friends.  The patch also provides
      a do_migrate_pages function that may be useful for cpusets to automatically
      move memory.  sys_migrate_pages does not modify policies in contrast to Ray's
      implementation.
      
      The current code here is based on the swap-based page migration capability
      and thus is not able to preserve the physical layout relative to its
      containing nodeset (which may be a cpuset).  When direct page migration
      becomes available, the implementation needs to be changed to do an
      isomorphic move of pages between different nodesets.  The current
      implementation simply evicts all pages in the source nodeset that are not
      in the target nodeset.
      
      The patch supports ia64, i386 and x86_64.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      39743889
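      A hedged userspace sketch of invoking the new syscall (raw syscall() is
      used because libc wrappers did not exist yet; the syscall-number macro and
      node choices are illustrative and require a kernel with this patch):

        /* Move the calling process's pages from node 0 to node 1. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void)
        {
                unsigned long old_nodes = 1UL << 0;     /* source: node 0 */
                unsigned long new_nodes = 1UL << 1;     /* target: node 1 */
                long ret;

                ret = syscall(SYS_migrate_pages, getpid(),
                              sizeof(unsigned long) * 8,        /* maxnode */
                              &old_nodes, &new_nodes);
                if (ret < 0)
                        perror("migrate_pages");
                else
                        printf("%ld pages could not be moved\n", ret);
                return 0;
        }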
    • kbuild: remove GCC_VERSION · ad14336d
      Committed by Sam Ravnborg
      This was causing some ordering problems.  Remove the up-front evaluation
      and just re-evaluate the compiler version each time we need it.
      
      (The up-front evaluation was problematic because some architectures modify
      the value of $(CC)).
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      ad14336d
  6. 06 Jan 2006 (1 commit)
  7. 05 Jan 2006 (1 commit)
  8. 04 Jan 2006 (1 commit)
  9. 17 Dec 2005 (5 commits)
  10. 16 Dec 2005 (1 commit)
  11. 15 Dec 2005 (1 commit)
  12. 13 Dec 2005 (2 commits)
  13. 07 Dec 2005 (4 commits)
    • [CPUFREQ] CPU frequency display in /proc/cpuinfo · 95235ca2
      Committed by Venkatesh Pallipadi
      What is the value shown in "cpu MHz" of /proc/cpuinfo when CPUs are capable of
      changing frequency?
      
      Today the answer is: It depends.
      On i386:
      SMP kernel - it is always the boot frequency.
      UP kernel - it scales with frequency changes and shows whatever was last set.
      
      On x86_64:
      There is a single variable, cpu_khz, that gets written by all the CPUs, so
      the frequency set by the last CPU will be seen in /proc/cpuinfo for all
      the CPUs in the system.  What you see also depends on whether you have a
      constant_tsc-capable CPU or not.
      
      On ia64:
      It is always the boot-time frequency of a particular CPU that gets displayed.
      
      The patch below changes this to:
      show the last known frequency of the particular CPU when cpufreq is
      present.  If the CPU does not support changing its frequency through
      cpufreq, the boot frequency is shown.  The patch affects the i386, x86_64
      and ia64 architectures.
      
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Dave Jones <davej@redhat.com>
      95235ca2
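      A hedged sketch of how the "cpu MHz" line can pick up the cpufreq value
      (the fallback to cpu_khz and the exact formatting are assumptions modelled
      on the i386 code of that era):

        /* In the arch show_cpuinfo() path: prefer the last known cpufreq
         * frequency, fall back to the boot-time value. */
        unsigned int freq = cpufreq_quick_get(cpu);     /* kHz, 0 if unknown */

        if (!freq)
                freq = cpu_khz;                         /* boot-time frequency */

        seq_printf(m, "cpu MHz\t\t: %u.%03u\n",
                   freq / 1000, freq % 1000);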
    • [IA64-SGI] Fix SN PTC deadlock recovery · 590711b7
      Committed by Jack Steiner
      The patch that added support for a new platform chipset (shub2) broke
      PTC deadlock recovery on older versions of the chipset. (PTCs are the
      SN platform-specific method for doing a global TLB purge). This
      patch fixes deadlock recovery so that it works on both the old & new
      chipsets.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      590711b7
    • [IA64] Change SET_PERSONALITY to comply with comment in binfmt_elf.c. · bd1d6e24
      Committed by Robin Holt
      We have a customer application which trips a bug.  The problem arises
      when a driver attempts to call do_munmap on an area which is mapped, but
      because current->thread.task_size has been set to 0xC0000000, the call
      to do_munmap fails thinking it is an unmap beyond the user's address
      space.
      
      The comment in fs/binfmt_elf.c in load_elf_library(), before the call
      to SET_PERSONALITY(), indicates that task_size must not be changed for
      the running application until flush_thread(), but it was being changed
      on ia64 when executing ia32 binaries.
      
      This patch moves the setting of task_size from SET_PERSONALITY() to
      flush_thread() as indicated.  The customer application no longer is able
      to trip the bug.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      bd1d6e24
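      A hedged sketch of the shape of the fix (the predicate and the constant
      below are assumptions, not the exact ia64 code): the 32-bit task_size is
      applied in flush_thread() rather than at SET_PERSONALITY() time.

        /* Illustrative only: set the 32-bit address-space limit when the old
         * executable's state is flushed, not when the personality is chosen. */
        #define IA32_TASK_SIZE 0xC0000000UL     /* hypothetical 32-bit limit */

        void flush_thread(void)
        {
                /* ... existing per-thread cleanup ... */

                if (current_is_ia32_task())     /* hypothetical predicate */
                        current->thread.task_size = IA32_TASK_SIZE;
        }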
    • [IA64] Limit the maximum NODEDATA_ALIGN() offset · acb7f672
      Committed by Jack Steiner
      The per-node data structures are allocated with strided offsets that are a
      function of the node number. This prevents excessive cache-aliasing from
      occurring.
      
      On systems with a large number of nodes, the strided offset becomes
      too large. This patch restricts the maximum offset to 32MB. This is far larger
      than the size of any current L3 cache.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      acb7f672
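      A hedged sketch of the capping described above (the stride constant and
      helper are assumptions; only the 32MB limit comes from the patch): the
      per-node offset grows with the node number but is bounded so it never
      exceeds 32MB.

        /* Illustrative: stride per-node data by a node-dependent offset to
         * avoid cache aliasing, but keep the offset below 32MB. */
        #define NODEDATA_ALIGN_STRIDE   (4UL * 1024 * 1024)   /* hypothetical */
        #define MAX_NODEDATA_OFFSET     (32UL * 1024 * 1024)  /* cap from the patch */

        static unsigned long nodedata_offset(int node)
        {
                unsigned long offset = (unsigned long)node * NODEDATA_ALIGN_STRIDE;

                return offset % MAX_NODEDATA_OFFSET;  /* wrap instead of growing */
        }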