1. 18 Jan 2006, 3 commits
  2. 17 Jan 2006, 3 commits
  3. 15 Jan 2006, 1 commit
    • [PATCH] Altix: ioc3 serial support · 2d0cfb52
      Patrick Gefre authored
      Add driver support for a 2-port PCI IOC3-based serial card on Altix boxes:
      
      This is a re-submission.  On the original submission I was asked to
      organize the code so that the MIPS ioc3 ethernet and serial parts could be
      used with this driver.  Stanislaw Skowronek was kind enough to provide the
      shim layer for this - thanks Stanislaw.  This patch includes the shim layer
      and the Altix PCI ioc3 serial driver.  The merged MIPS ioc3 ethernet and
      serial support is forthcoming.
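
      A shim of this kind typically exposes a small registration API so that
      several function drivers can share the one PCI device.  A purely
      illustrative sketch of the idea follows; all names here are assumptions,
      not necessarily the patch's actual interface:

        /* Hypothetical shim interface: function drivers (serial, ethernet)
         * register against the common IOC3 PCI driver, which probes the
         * device once and fans out probe/remove/interrupt events. */
        #include <linux/pci.h>

        struct ioc3_submodule {
                const char *name;
                int  (*probe)(struct ioc3_submodule *, struct pci_dev *);
                int  (*remove)(struct ioc3_submodule *, struct pci_dev *);
                void (*intr)(struct ioc3_submodule *, struct pci_dev *,
                             unsigned int pending);
        };

        int ioc3_register_submodule(struct ioc3_submodule *is);
        void ioc3_unregister_submodule(struct ioc3_submodule *is);
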
      Signed-off-by: Patrick Gefre <pfg@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 14 Jan 2006, 7 commits
  5. 13 Jan 2006, 4 commits
    • [PATCH] ia64: task_pt_regs() · 6450578f
      Al Viro authored
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ia64: task_thread_info() · ab03591d
      Al Viro authored
      On ia64, thread_info is at a constant offset from task_struct, and the
      stack is embedded in the same allocation.  Set __HAVE_THREAD_FUNCTIONS
      and make task_thread_info() just add a constant.
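
      With that layout the accessor reduces to pointer arithmetic; a minimal
      sketch, assuming IA64_TASK_SIZE is the constant offset of the embedded
      thread_info within the task_struct allocation:

        /* Sketch: thread_info sits at a fixed offset inside the task_struct
         * allocation, so no extra dereference is needed. */
        #define task_thread_info(tsk) \
                ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))
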
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] scheduler cache-hot-autodetect · 198e2f18
      akpm@osdl.org authored
      
      From: Ingo Molnar <mingo@elte.hu>
      
      This is the latest version of the scheduler cache-hot-auto-tune patch.
      
      The first problem was that detection time scaled with O(N^2), which is
      unacceptable on larger SMP and NUMA systems. To solve this:
      
      - I've added a 'domain distance' function, which is used to cache
        measurement results. Each distance is only measured once. This means
        that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
        distances 0 and 1, and on SMP distance 0 is measured. The code walks
        the domain tree to determine the distance, so it automatically follows
        whatever hierarchy an architecture sets up. This cuts down on the boot
        time significantly and removes the O(N^2) limit. The only assumption
        is that migration costs can be expressed as a function of domain
        distance - this covers the overwhelming majority of existing systems,
        and is a good guess even for more asymmetric systems (a sketch of the
        distance walk follows the note below).
      
        [ People hacking systems that have asymmetries that break this
          assumption (e.g. different CPU speeds) should experiment a bit with
          the cpu_distance() function. Adding a ->migration_distance factor to
          the domain structure would be one possible solution - but let's first
          see the problem systems, if they exist at all. Let's not overdesign. ]
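
      A sketch of what such a distance walk might look like against the
      2.6.15-era scheduler-domains API (for_each_domain(), cpumask-style
      spans); this is an outline of the idea, not the patch's exact code:

        /* Sketch: the distance between two CPUs is the number of domain
         * levels cpu1 must ascend before its span contains cpu2.  Because
         * this walks the sched_domain tree, it follows whatever hierarchy
         * the architecture set up (SMP, HT, NUMA...). */
        static int domain_distance(int cpu1, int cpu2)
        {
                struct sched_domain *sd;
                int distance = 0;

                for_each_domain(cpu1, sd) {
                        if (cpu_isset(cpu2, sd->span))
                                return distance;
                        distance++;
                }
                return distance;        /* not reached on sane topologies */
        }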
      
      Another problem was that only a single cache-size was used for measuring
      the cost of migration, and most architectures didn't set that variable
      up. Furthermore, a single cache-size does not fit NUMA hierarchies with
      L3 caches and does not fit HT setups, where different CPUs will often
      have different 'effective cache sizes'. To solve this problem:
      
      - Instead of relying on a single cache-size provided by the platform and
        sticking to it, the code now auto-detects the 'effective migration
        cost' between two measured CPUs, by iterating through a wide range of
        cache sizes. The code searches for the maximum migration cost, which
        occurs when the working set of the test-workload falls just below the
        'effective cache size'. I.e., a real-life, optimized search is done
        for the maximum migration cost between two real CPUs.
      
        This, amongst other things, has the positive effect that if e.g. two
        CPUs share an L2/L3 cache, a different (and accurate) migration cost
        will be found than between two CPUs on the same system that don't
        share any caches.
      
      (The reliable measurement of migration costs is tricky - see the source
      for details.)
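
      In outline, the search iterates candidate working-set sizes and keeps
      the maximum measured cost; a simplified sketch, where measure_cost() is
      a stand-in for the actual timed migration workload:

        /* Sketch: the migration cost peaks when the working set just fits
         * the 'effective cache size' between the two CPUs, so scan a range
         * of sizes and keep the maximum. */
        static unsigned long long
        measure_migration_cost(int cpu1, int cpu2, unsigned long max_size)
        {
                unsigned long long cost, max_cost = 0;
                unsigned long size;

                for (size = max_size / 10; size <= max_size;
                     size += max_size / 10) {
                        cost = measure_cost(cpu1, cpu2, size);
                        if (cost > max_cost)
                                max_cost = cost;
                }
                return max_cost;
        }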
      
      Furthermore, I've added various boot-time options to override/tune
      migration behavior.
      
      Firstly, there's a blanket override for autodetection:
      
      	migration_cost=1000,2000,3000
      
      will override the depth 0/1/2 values with 1msec/2msec/3msec values.
      
      Secondly, there's a global factor that can be used to increase (or
      decrease) the autodetected values:
      
      	migration_factor=120
      
      will increase the autodetected values by 20%. This option is useful to
      tune things in a workload-dependent way - e.g. if a workload is
      cache-insensitive then CPU utilization can be maximized by specifying
      migration_factor=0.
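
      Boot options like these are conventionally wired up with __setup(); a
      hedged sketch of the plumbing (array size, units, and variable names
      here are assumptions, not the patch's exact code):

        #include <linux/init.h>
        #include <linux/kernel.h>

        #define MAX_DOMAIN_DISTANCE 8

        static unsigned long long migration_cost[MAX_DOMAIN_DISTANCE];
        static unsigned int migration_factor = 100;     /* percent */

        /* migration_cost=1000,2000,3000 -> depth 0/1/2 costs, in usecs */
        static int __init migration_cost_setup(char *str)
        {
                int ints[MAX_DOMAIN_DISTANCE + 1], i;

                str = get_options(str, ARRAY_SIZE(ints), ints);
                for (i = 0; i < ints[0] && i < MAX_DOMAIN_DISTANCE; i++)
                        migration_cost[i] =
                                (unsigned long long)ints[i + 1] * 1000;
                return 1;
        }
        __setup("migration_cost=", migration_cost_setup);

        /* migration_factor=120 -> scale autodetected values to 120% */
        static int __init migration_factor_setup(char *str)
        {
                int factor;

                if (get_option(&str, &factor))
                        migration_factor = factor;
                return 1;
        }
        __setup("migration_factor=", migration_factor_setup);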
      
      I've tested the autodetection code quite extensively on x86, on three
      systems (a dual Celeron, a dual HT P4, and an 8-way P3/Xeon with 2MB L2
      cache), and the autodetected values look pretty good:
      
      Dual Celeron (128K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
       ---------------------
                 [00]    [01]
       [00]:     -     1.7(1)
       [01]:   1.7(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 1.7 (1784008)
       ---------------------
      
      Here the slow memory subsystem dominates system performance, and even
      though caches are small, the migration cost is 1.7 msecs.
      
      Dual HT P4 (512K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]
       [00]:     -     0.4(1)  0.0(0)  0.4(1)
       [01]:   0.4(1)    -     0.4(1)  0.0(0)
       [02]:   0.0(0)  0.4(1)    -     0.4(1)
       [03]:   0.4(1)  0.0(0)  0.4(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (33900) 0.4 (448514)
       ---------------------
      
      Here it can be seen that there is no migration cost between two HT
      siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory
      system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.
      
      8-way P3/Xeon [2MB L2 cache]:
      
       ---------------------
       migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
       [00]:     -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [01]:  19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [02]:  19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [03]:  19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [04]:  19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
       [05]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
       [06]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
       [07]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 19.2 (19281756)
       ---------------------
      
      This one has huge caches and a relatively slow memory subsystem - so the
      migration cost is 19 msecs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Cc: <wilder@us.ibm.com>
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: add cacheflush() asm · 4dc7a0bb
      Ingo Molnar authored
      Add per-arch sched_cacheflush() which is a write-back cacheflush used by
      the migration-cost calibration code at bootup time.
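
      How an architecture implements this varies; where no dedicated
      instruction or firmware call exists, a fallback could simply stream
      writes through a buffer larger than the largest cache.  A deliberately
      naive sketch of that idea, not any particular architecture's version:

        #include <linux/string.h>

        /* Sketch: force write-back by dirtying a buffer comfortably larger
         * than the biggest cache in the system (the size is an assumption). */
        #define SCHED_CACHEFLUSH_SIZE (8 * 1024 * 1024)

        static char sched_cacheflush_buf[SCHED_CACHEFLUSH_SIZE];

        static inline void sched_cacheflush(void)
        {
                memset(sched_cacheflush_buf, 1, sizeof(sched_cacheflush_buf));
        }
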
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 11 Jan 2006, 5 commits
  7. 10 Jan 2006, 2 commits
  8. 09 Jan 2006, 5 commits
    • [PATCH] /dev/mem: validate mmap requests · 80851ef2
      Bjorn Helgaas authored
      Add a hook so architectures can validate /dev/mem mmap requests.
      
      This is analogous to validation we already perform in the read/write
      paths.
      
      The identity mapping scheme used on ia64 requires that each 16MB or
      64MB granule be accessed with exactly one attribute (write-back or
      uncacheable).  This avoids "attribute aliasing", which can cause a
      machine check.
      
      Sample problem scenario:
        - Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
        - efi_memmap_init() discards any write-back (WB) memory in the first granule
        - Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
        - hwinfo receives UC mapping (the default, since memmap says "no WB here")
        - Machine check abort (on chipsets that don't support UC access to WB
          memory, e.g., sx1000)
      
      In the scenario above, the only choices are
        - Use WB for hwinfo mmap.  Can't do this because it causes attribute
          aliasing with the UC mapping for the VGA MMIO space.
        - Use UC for hwinfo mmap.  Can't do this because the chipset may not
          support UC for that region.
        - Disallow the hwinfo mmap with -EINVAL.  That's what this patch does.
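
      The hook pattern might look roughly like this, with a permissive
      default that an architecture such as ia64 overrides (the helper name
      and placement are assumptions based on the description):

        #include <linux/fs.h>
        #include <linux/mm.h>

        /* Sketch: default is 'anything goes'; ia64 would override this to
         * refuse ranges that would cause attribute aliasing. */
        #ifndef valid_mmap_phys_addr_range
        static inline int valid_mmap_phys_addr_range(unsigned long addr,
                                                     size_t size)
        {
                return 1;
        }
        #endif

        static int mmap_mem(struct file *file, struct vm_area_struct *vma)
        {
                size_t size = vma->vm_end - vma->vm_start;
                unsigned long addr = vma->vm_pgoff << PAGE_SHIFT;

                if (!valid_mmap_phys_addr_range(addr, size))
                        return -EINVAL;
                /* ... proceed with remap_pfn_range() as before ... */
                return 0;
        }
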
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] remove gcc-2 checks · a1365647
      Andrew Morton authored
      Remove various things which were checking for gcc-1.x and gcc-2.x compilers.
      
      From: Adrian Bunk <bunk@stusta.de>
      
          Some documentation updates, and removal of some code paths for gcc < 3.2.
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] consolidate asm/futex.h · f8aaeace
      Jeff Dike authored
      Most of the architectures have the same asm/futex.h.  This consolidates them
      into asm-generic, with the arches including it from their own asm/futex.h.
      
      In the case of UML, this reverts the old broken futex.h and goes back to using
      the same one as almost everyone else.
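
      After the consolidation, a typical arch header reduces to a single
      include; a sketch of the pattern:

        /* include/asm-<arch>/futex.h, post-consolidation: just pull in
         * the generic implementation. */
        #ifndef _ASM_FUTEX_H
        #define _ASM_FUTEX_H

        #include <asm-generic/futex.h>

        #endif
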
      Signed-off-by: Jeff Dike <jdike@addtoit.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Kill L1_CACHE_SHIFT_MAX · 1fd73c6b
      Ravikiran G Thirumalai authored
      Kill L1_CACHE_SHIFT_MAX from all arches.  Since L1_CACHE_SHIFT_MAX is not
      used anymore with the introduction of INTERNODE_CACHE, it can go away
      entirely.
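
      For context, the INTERNODE_CACHE pattern that replaces it looks roughly
      like this (a sketch based on the description, not the exact hunk):

        /* Sketch: the internode shift defaults to the L1 shift; a large
         * NUMA platform can override it, so a global worst-case
         * L1_CACHE_SHIFT_MAX is no longer needed. */
        #ifndef INTERNODE_CACHE_SHIFT
        #define INTERNODE_CACHE_SHIFT L1_CACHE_SHIFT
        #endif

        #define ____cacheline_internodealigned_in_smp \
                __attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
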
      Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Signed-off-by: Shai Fultheim <shai@scalex86.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Swap Migration V5: sys_migrate_pages interface · 39743889
      Christoph Lameter authored
      sys_migrate_pages implementation using swap based page migration
      
      This is the original API proposed by Ray Bryant in his posts during the first
      half of 2005 on linux-mm@kvack.org and linux-kernel@vger.kernel.org.
      
      The intent of sys_migrate_pages is to migrate the memory of a process.  A
      process may have migrated to another node.  Memory was allocated optimally
      for the prior context.  sys_migrate_pages allows shifting the memory to
      the new node.

      sys_migrate_pages is also useful if a process's available memory nodes
      have changed through cpuset operations and the process's memory should be
      moved manually.  Paul Jackson is working on an automated mechanism that
      will allow automatic migration if the cpuset of a process is changed.
      However, a user may decide to manually control the migration.
      
      This implementation is put into the policy layer since it uses concepts and
      functions that are also needed for mbind and friends.  The patch also provides
      a do_migrate_pages function that may be useful for cpusets to automatically
      move memory.  sys_migrate_pages does not modify policies, in contrast to
      Ray's implementation.
      
      The current code here is based on the swap-based page migration capability
      and thus is not able to preserve the physical layout relative to its
      containing nodeset (which may be a cpuset).  When direct page migration
      becomes available, the implementation will need to be changed to do an
      isomorphic move of pages between different nodesets.  The current
      implementation simply evicts all pages in the source nodeset that are not
      in the target nodeset.
      
      Patch supports ia64, i386 and x86_64.
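
      From userspace the call takes a pid plus old/new node bitmaps; a minimal
      hedged usage sketch via the raw syscall (node numbers are assumptions
      for a two-node box, error handling abridged):

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        /* Sketch: ask the kernel to move the calling process's pages from
         * node 0 to node 1.  On success the call returns the number of
         * pages that could not be moved. */
        int main(void)
        {
                unsigned long old_nodes = 1UL << 0;     /* source: node 0 */
                unsigned long new_nodes = 1UL << 1;     /* target: node 1 */
                long ret;

                ret = syscall(__NR_migrate_pages, 0 /* self */,
                              sizeof(old_nodes) * 8 + 1,
                              &old_nodes, &new_nodes);
                if (ret < 0)
                        perror("migrate_pages");
                else
                        printf("%ld pages could not be moved\n", ret);
                return 0;
        }
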
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  9. 07 Jan 2006, 3 commits
    • [PATCH] atomic_long_t & include/asm-generic/atomic.h V2 · d3cb4871
      Christoph Lameter authored
      Several counters already need to use 64-bit atomic variables on 64-bit
      platforms (see mm_counter_t in sched.h).  We have to do ugly ifdefs to
      fall back to 32-bit atomics on 32-bit platforms.
      
      The VM statistics patch that I am working on will also make more extensive
      use of atomic64.
      
      This patch introduces a new type, atomic_long_t, by providing definitions
      in asm-generic/atomic.h that work like the C "long" type: it is 32 bits
      on 32-bit platforms and 64 bits on 64-bit platforms.
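
      The asm-generic definition follows the pattern below (an abridged
      sketch; the real header covers the full operation set):

        /* Sketch of asm-generic/atomic.h: pick the native-word type. */
        #if BITS_PER_LONG == 64

        typedef atomic64_t atomic_long_t;

        #define ATOMIC_LONG_INIT(i)  ATOMIC64_INIT(i)
        #define atomic_long_read(l)  atomic64_read((atomic64_t *)(l))
        #define atomic_long_inc(l)   atomic64_inc((atomic64_t *)(l))

        #else /* BITS_PER_LONG == 32 */

        typedef atomic_t atomic_long_t;

        #define ATOMIC_LONG_INIT(i)  ATOMIC_INIT(i)
        #define atomic_long_read(l)  atomic_read((atomic_t *)(l))
        #define atomic_long_inc(l)   atomic_inc((atomic_t *)(l))

        #endif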
      
      Also cleans up the determination of the mm_counter_t in sched.h.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kill last zone_reclaim() bits · 7756b9e4
      Andrew Morton authored
      Remove the last bits of Martin's ill-fated sys_set_zone_reclaim().
      
      Cc: Martin Hicks <mort@wildopensource.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] madvise(MADV_REMOVE): remove pages from tmpfs shm backing store · f6b3ec23
      Badari Pulavarty authored
      Here is the patch to implement madvise(MADV_REMOVE), which frees up a
      given range of pages and its associated backing store.  The current
      implementation supports only shmfs/tmpfs; other filesystems return
      -ENOSYS.
      
      "Some app allocates large tmpfs files, then when some task quits and some
      client disconnect, some memory can be released.  However the only way to
      release tmpfs-swap is to MADV_REMOVE". - Andrea Arcangeli
      
      Databases want to use this feature to drop a section of their bufferpool
      (shared memory segments) - without writing back to disk/swap space.
      
      This feature is also useful for supporting hot-plug memory on UML.
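
      Usage is the ordinary madvise() pattern on a mapped tmpfs file; a
      hedged sketch (paths and sizes are illustrative, error handling
      abridged):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Sketch: punch the first 1MB out of a 4MB tmpfs mapping,
         * releasing both the pages and their backing store. */
        int main(void)
        {
                size_t len = 4 * 1024 * 1024;
                int fd = open("/dev/shm/bufferpool", O_RDWR | O_CREAT, 0600);
                char *p;

                ftruncate(fd, len);
                p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, 0);

                /* ... use the region ... */

                if (madvise(p, 1024 * 1024, MADV_REMOVE) != 0)
                        perror("madvise");      /* ENOSYS on non-tmpfs */

                munmap(p, len);
                close(fd);
                return 0;
        }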
      
      Concerns raised by Andrew Morton:
      
      - "We have no plan for holepunching!  If we _do_ have such a plan (or
        might in the future) then what would the API look like?  I think
        sys_holepunch(fd, start, len), so we should start out with that."
      
      - Using madvise is very weird, because people will ask "why do I need to
        mmap my file before I can stick a hole in it?"
      
      - None of the other madvise operations call into the filesystem in this
        manner.  A broad question is: is this capability an MM operation or a
        filesystem operation?  truncate, for example, is a filesystem operation
        which sometimes has MM side-effects.  madvise is an mm operation and,
        with this patch, it gains FS side-effects, only they're really, really
        significant ones.
      
      Comments:
      
      - Andrea suggested the fs operation too, but then it's more efficient to
        have it as an mm operation with fs side effects, because userspace
        doesn't immediately know the fd and physical offset of the range.  It's
        possible to fix this up in userland and use the fs operation, but it's
        more expensive; the vmas are already in the kernel and we can use them.
      
      Short-term plan & future direction:
      
      - We seem to need this interface only for shmfs/tmpfs files in the short
        term.  We have to add hooks into the filesystem for correctness and
        completeness.  This is what this patch does.
      
      - In the future, the plan is to support both fs and mmap APIs as well.
        This also involves implementing (other) filesystem-specific functions.
      
      - Current patch doesn't support VM_NONLINEAR - which can be addressed in
        the future.
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Andrea Arcangeli <andrea@suse.de>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 04 Jan 2006, 1 commit
  11. 25 Dec 2005, 1 commit
  12. 17 Dec 2005, 1 commit
    • [IA64] disable preemption in udelay() · f5899b5d
      John Hawkes authored
      The udelay() inline for ia64 uses the ITC.  If CONFIG_PREEMPT is enabled
      and the platform has unsynchronized ITCs and the calling task migrates
      to another CPU while doing the udelay loop, then the effective delay may
      be too short or very, very long.
      
      This patch disables preemption around 100 usec chunks of the overall
      desired udelay time.  This minimizes preemption-holdoffs.
      
      udelay() is now too big to be inlined, so move it out of line and export it.
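
      In outline, the chunking looks like this (the ITC spin itself is
      elided; a sketch of the approach, not the exact patch):

        #include <linux/module.h>
        #include <linux/preempt.h>

        /* Sketch: disable preemption only around <=100 usec chunks, so the
         * ITC is always read on one CPU while preemption-holdoff latency
         * stays bounded. */
        void udelay(unsigned long usecs)
        {
                while (usecs > 0) {
                        unsigned long chunk = usecs > 100 ? 100 : usecs;

                        preempt_disable();
                        /* ... spin on the local ITC for 'chunk' usecs ... */
                        preempt_enable();
                        usecs -= chunk;
                }
        }
        EXPORT_SYMBOL(udelay);
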
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  13. 14 Dec 2005, 1 commit
    • [IA64] Split 16-bit severity field in sal_log_record_header · eed66cfc
      Tony Luck authored
      The ERR_SEVERITY item is defined as an 8-bit field in the SAL
      documentation (§B.2.1, rev. December 2003), but as a u16 in sal.h.
      This has the side effect that current code in mca.c may not call
      ia64_sal_clear_state_info() upon receiving corrected platform errors
      if there are bits set in the validation byte.  Reported by Xavier Bru.
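
      The fix amounts to splitting the u16 into two bytes; a sketch of the
      corrected layout (surrounding fields elided, names assumed):

        #include <linux/types.h>

        /* Sketch: ERR_SEVERITY becomes a u8, matching the SAL spec, with
         * the adjacent validation bits split into their own byte. */
        typedef struct sal_log_record_header {
                /* ... */
                u8 severity;            /* SAL ERR_SEVERITY (was half a u16) */
                u8 validation_bits;     /* previously folded into severity */
                /* ... */
        } sal_log_record_header_t;
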
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  14. 13 Dec 2005, 1 commit
  15. 08 Dec 2005, 1 commit
  16. 07 Dec 2005, 1 commit
    • [IA64] Change SET_PERSONALITY to comply with comment in binfmt_elf.c. · bd1d6e24
      Robin Holt authored
      We have a customer application which trips a bug.  The problem arises
      when a driver attempts to call do_munmap on an area which is mapped, but
      because current->thread.task_size has been set to 0xC0000000, the call
      to do_munmap fails, thinking it is an unmap beyond the user's address
      space.
      
      The comment in fs/binfmt_elf.c in load_elf_library(), before the call
      to SET_PERSONALITY(), indicates that task_size must not be changed for
      the running application until flush_thread(), but it is being changed
      for ia64 executing ia32 binaries.
      
      This patch moves the setting of task_size from SET_PERSONALITY() to
      flush_thread() as indicated.  The customer application is no longer able
      to trip the bug.
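
      Conceptually the change moves one assignment; a hedged sketch of the
      flush_thread() side (macro names from the ia64 ia32-emulation layer,
      assumed here rather than quoted from the patch):

        /* Sketch: set the ia32 task_size only once the old image is gone,
         * in flush_thread(), instead of at SET_PERSONALITY() time. */
        void flush_thread(void)
        {
                /* ... drop debug/fp state of the old image ... */

                if (IS_IA32_PROCESS(task_pt_regs(current)))
                        current->thread.task_size = IA32_PAGE_OFFSET;
        }
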
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>