1. 24 March 2006, 40 commits
    • R
      [PATCH] early_printk: clean up trailing whitespace · a94ddf3a
      Committed by Randy Dunlap
      Remove all trailing tabs and spaces.  No other changes.
      Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a94ddf3a
    • A
      [PATCH] fadvise(): write commands · ebcf28e1
      Committed by Andrew Morton
      Add two new Linux-specific fadvise() extensions:
      
      LINUX_FADV_ASYNC_WRITE: start async writeout of any dirty pages between file
      offsets `offset' and `offset+len'.  Any pages which are currently under
      writeout are skipped, whether or not they are dirty.
      
      LINUX_FADV_WRITE_WAIT: wait upon writeout of any dirty pages between file
      offsets `offset' and `offset+len'.
      
      By combining these two operations the application may do several things:
      
      LINUX_FADV_ASYNC_WRITE: push some or all of the dirty pages at the disk.
      
      LINUX_FADV_WRITE_WAIT, LINUX_FADV_ASYNC_WRITE: push all of the currently dirty
      pages at the disk.
      
      LINUX_FADV_WRITE_WAIT, LINUX_FADV_ASYNC_WRITE, LINUX_FADV_WRITE_WAIT: push all
      of the currently dirty pages at the disk, wait until they have been written.
      
      It should be noted that none of these operations write out the file's
      metadata.  So unless the application is strictly performing overwrites of
      already-instantiated disk blocks, there are no guarantees here that the data
      will be available after a crash.
      
      To complete this suite of operations I guess we should have a "sync file
      metadata only" operation.  This gives applications access to all the building
      blocks needed for all sorts of sync operations.  But sync-metadata doesn't fit
      well with the fadvise() interface.  Probably it should be a new syscall:
      sys_fmetadatasync().
      
      The patch also diddles with the meaning of `endbyte' in sys_fadvise64_64().
      It is made to represent the last affected byte in the file (ie: it is
      inclusive).  Generally, all these byterange and pagerange functions are
      inclusive so we can easily represent EOF with -1.
      
      As Ulrich notes, these two functions are somewhat abusive of the fadvise()
      concept, which appears to be "set the future policy for this fd".
      
      But these commands are a perfect fit with the fadvise() implementation, and
      several of the existing fadvise() commands are synchronous and don't affect
      future policy either.  I think we can live with the slight incongruity.
      
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ebcf28e1
    • A
      [PATCH] filemap_fdatawrite_range() api: clarify -end parameter · 469eb4d0
      Committed by Andrew Morton
      I had trouble working out whether filemap_fdatawrite_range()'s
      `end' parameter describes the last-byte-to-be-written or the last-plus-one.
      Clarify that in comments.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      469eb4d0
    • J
      [PATCH] CONFIG_UNWIND_INFO · 604bf5a2
      Committed by Jan Beulich
      As a foundation for reliable stack unwinding, this adds a config option
      (available to all architectures except IA64 and those where the module
      loader might have problems with the resulting relocations) to enable the
      generation of frame unwind information.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      604bf5a2
    • J
      [PATCH] abstract type/size specification for assembly · ab7efcc9
      Committed by Jan Beulich
      Provide abstraction for generating type and size information of assembly
      routines and data, while permitting architectures to override these
      defaults.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: "Russell King" <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: "Andi Kleen" <ak@suse.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ab7efcc9
    • A
      [PATCH] fast ext3_statfs · 09fe316a
      Committed by Alex Tomas
      Under I/O load it may take up to a dozen seconds to read all group
      descriptors.  This is what ext3_statfs() does.  At the same time, we already
      maintain global numbers of free inodes/blocks.  Why don't we use them instead
      of group reading and summing?
      
      Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      09fe316a
    • O
      [PATCH] remove ipmi pm_power_off redefinition · e933b6d6
      Committed by Olaf Hering
      Use the global define of pm_power_off.
      Signed-off-by: Olaf Hering <olh@suse.de>
      Cc: Corey Minyard <minyard@acm.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e933b6d6
    • P
      [PATCH] isofs: remove unused debugging macros · 5b3cf3e0
      Committed by Pekka Enberg
      Remove unused debugging macros from isofs.  The referred debug functions do
      not exist in the kernel.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      5b3cf3e0
    • A
      [PATCH] s/;;/;/g · 53b3531b
      Committed by Alexey Dobriyan
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      53b3531b
    • P
      [PATCH] cpuset: remove useless local variable initialization · 29afd49b
      Committed by Paul Jackson
      Remove a useless variable initialization in cpuset __cpuset_zone_allowed().
      The local variable 'allowed' is unconditionally set before use, later on
      in the code, so does not need to be initialized.
      
      Not that it seems to matter to the code generated any, as the compiler
      optimizes out the superfluous assignment anyway.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      29afd49b
    • P
      [PATCH] cpuset: memory_spread_slab drop useless PF_SPREAD_PAGE check · b2455396
      Committed by Paul Jackson
      The hook in the slab cache allocation path to handle cpuset memory
      spreading for tasks in cpusets with 'memory_spread_slab' enabled has a
      modest performance bug.  The hook calls into the memory spreading handler
      alternate_node_alloc() if either of 'memory_spread_slab' or
      'memory_spread_page' is enabled, even though the handler does nothing
      (albeit harmlessly) for the page case.
      
      Fix - drop PF_SPREAD_PAGE from the set of flag bits that are used to
      trigger a call to alternate_node_alloc().
      
      The page case is handled by separate hooks -- see the calls conditioned on
      cpuset_do_page_mem_spread() in mm/filemap.c.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b2455396
    • P
      [PATCH] cpuset: don't need to mark cpuset_mems_generation atomic · 151a4420
      Committed by Paul Jackson
      Drop the atomic_t marking on the cpuset static global
      cpuset_mems_generation.  Since all access to it is guarded by the global
      manage_mutex, there is no need for further serialization of this value.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      151a4420
    • P
      [PATCH] cpuset: remove unnecessary NULL check · 8488bc35
      Committed by Paul Jackson
      Remove a no longer needed test for NULL cpuset pointer, with a little
      comment explaining why the test isn't needed.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8488bc35
    • P
      [PATCH] cpuset memory spread slab cache hooks · b0196009
      Committed by Paul Jackson
      Change the kmem_cache_create calls for certain slab caches to support cpuset
      memory spreading.
      
      See the previous patches, cpuset_mem_spread, for an explanation of cpuset
      memory spreading, and cpuset_mem_spread_slab_cache for the slab cache support
      for memory spreading.
      
      The slab caches marked for now are: dentry_cache, inode_cache, some xfs slab
      caches, and buffer_head.  This list may change over time.  In particular,
      other file system types that are used extensively on large NUMA systems may
      want to allow for spreading their directory and inode slab cache entries.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b0196009
    • P
      [PATCH] cpuset memory spread slab cache optimizations · c61afb18
      Committed by Paul Jackson
      The hooks in the slab cache allocator code path for support of NUMA
      mempolicies and cpuset memory spreading are in an important code path.  Many
      systems will use neither feature.
      
      This patch optimizes those hooks down to a single check of some bits in the
      current task's task_struct flags.  For non-NUMA systems, this hook and related
      code is already ifdef'd out.
      
      The optimization is done by using another task flag, set if the task is using
      a non-default NUMA mempolicy.  Taking this flag bit along with the
      PF_SPREAD_PAGE and PF_SPREAD_SLAB flag bits added earlier in this 'cpuset
      memory spreading' patch set, one can check for the combination of any of these
      special case memory placement mechanisms with a single test of the current
      task's task_struct flags.
      
      This patch also tightens up the code, to save a few bytes of kernel text
      space, and moves some of it out of line.  Due to the nested inlines called
      from multiple places, we were ending up with three copies of this code, which
      once we get off the main code path (for local node allocation) seems a bit
      wasteful of instruction memory.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c61afb18
    • P
      [PATCH] cpuset memory spread slab cache implementation · 101a5001
      Committed by Paul Jackson
      Provide the slab cache infrastructure to support cpuset memory spreading.
      
      See the previous patches, cpuset_mem_spread, for an explanation of cpuset
      memory spreading.
      
      This patch provides a slab cache SLAB_MEM_SPREAD flag.  If set in the
      kmem_cache_create() call defining a slab cache, then any task marked with the
      process state flag PF_MEMSPREAD will spread memory page allocations for that
      cache over all the allowed nodes, instead of preferring the local (faulting)
      node.
      
      On systems not configured with CONFIG_NUMA, this results in no change to the
      page allocation code path for slab caches.
      
      On systems with cpusets configured in the kernel, but the "memory_spread"
      cpuset option not enabled for the current task's cpuset, this adds a call to a
      cpuset routine and failed bit test of the processor state flag PF_SPREAD_SLAB.
      
      For tasks so marked, a second inline test is done for the slab cache flag
      SLAB_MEM_SPREAD, and if that is set and if the allocation is not
      in_interrupt(), this adds a call to a cpuset routine that computes which of
      the task's mems_allowed nodes should be preferred for this allocation.
      
      ==> This patch adds another hook into the performance critical
          code path to allocating objects from the slab cache, in the
          ____cache_alloc() chunk, below.  The next patch optimizes this
          hook, reducing the impact of the combined mempolicy plus memory
          spreading hooks on this critical code path to a single check
          against the task's task_struct flags word.
      
      This patch provides the generic slab flags and logic needed to apply memory
      spreading to a particular slab.
      
      A subsequent patch will mark a few specific slab caches for this placement
      policy.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      101a5001
    • P
      [PATCH] cpuset memory spread: slab cache format · fffb60f9
      Committed by Paul Jackson
      Rewrap the overly long source code lines resulting from the previous
      patch's addition of the slab cache flag SLAB_MEM_SPREAD.  This patch
      contains only formatting changes, and no function change.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fffb60f9
    • P
      [PATCH] cpuset memory spread: slab cache filesystems · 4b6a9316
      Committed by Paul Jackson
      Mark file system inode and similar slab caches subject to SLAB_MEM_SPREAD
      memory spreading.
      
      If a slab cache is marked SLAB_MEM_SPREAD, then anytime that a task that's
      in a cpuset with the 'memory_spread_slab' option enabled goes to allocate
      from such a slab cache, the allocations are spread evenly over all the
      memory nodes (task->mems_allowed) allowed to that task, instead of favoring
      allocation on the node local to the current cpu.
      
      The following inode and similar caches are marked SLAB_MEM_SPREAD:
      
          file                               cache
          ====                               =====
          fs/adfs/super.c                    adfs_inode_cache
          fs/affs/super.c                    affs_inode_cache
          fs/befs/linuxvfs.c                 befs_inode_cache
          fs/bfs/inode.c                     bfs_inode_cache
          fs/block_dev.c                     bdev_cache
          fs/cifs/cifsfs.c                   cifs_inode_cache
          fs/coda/inode.c                    coda_inode_cache
          fs/dquot.c                         dquot
          fs/efs/super.c                     efs_inode_cache
          fs/ext2/super.c                    ext2_inode_cache
          fs/ext2/xattr.c (fs/mbcache.c)     ext2_xattr
          fs/ext3/super.c                    ext3_inode_cache
          fs/ext3/xattr.c (fs/mbcache.c)     ext3_xattr
          fs/fat/cache.c                     fat_cache
          fs/fat/inode.c                     fat_inode_cache
          fs/freevxfs/vxfs_super.c           vxfs_inode
          fs/hpfs/super.c                    hpfs_inode_cache
          fs/isofs/inode.c                   isofs_inode_cache
          fs/jffs/inode-v23.c                jffs_fm
          fs/jffs2/super.c                   jffs2_i
          fs/jfs/super.c                     jfs_ip
          fs/minix/inode.c                   minix_inode_cache
          fs/ncpfs/inode.c                   ncp_inode_cache
          fs/nfs/direct.c                    nfs_direct_cache
          fs/nfs/inode.c                     nfs_inode_cache
          fs/ntfs/super.c                    ntfs_big_inode_cache_name
          fs/ntfs/super.c                    ntfs_inode_cache
          fs/ocfs2/dlm/dlmfs.c               dlmfs_inode_cache
          fs/ocfs2/super.c                   ocfs2_inode_cache
          fs/proc/inode.c                    proc_inode_cache
          fs/qnx4/inode.c                    qnx4_inode_cache
          fs/reiserfs/super.c                reiser_inode_cache
          fs/romfs/inode.c                   romfs_inode_cache
          fs/smbfs/inode.c                   smb_inode_cache
          fs/sysv/inode.c                    sysv_inode_cache
          fs/udf/super.c                     udf_inode_cache
          fs/ufs/super.c                     ufs_inode_cache
          net/socket.c                       sock_inode_cache
          net/sunrpc/rpc_pipe.c              rpc_inode_cache
      
      The choice of which slab caches to so mark was quite simple.  I marked
      those already marked SLAB_RECLAIM_ACCOUNT, except for fs/xfs, dentry_cache,
      inode_cache, and buffer_head, which were marked in a previous patch.  Even
      though SLAB_RECLAIM_ACCOUNT is for a different purpose, it marks the same
      potentially large file system i/o related slab caches as we need for memory
      spreading.
      
      Given that the rule now becomes "wherever you would have used a
      SLAB_RECLAIM_ACCOUNT slab cache flag before (usually the inode cache), use
      the SLAB_MEM_SPREAD flag too", this should be easy enough to maintain.
      Future file system writers will just copy one of the existing file system
      slab cache setups and tend to get it right without thinking.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4b6a9316
    • P
      [PATCH] cpuset memory spread page cache implementation and hooks · 44110fe3
      Committed by Paul Jackson
      Change the page cache allocation calls to support cpuset memory spreading.
      
      See the previous patch, cpuset_mem_spread, for an explanation of cpuset memory
      spreading.
      
      On systems without cpusets configured in the kernel, this is no change.
      
      On systems with cpusets configured in the kernel, but the "memory_spread"
      cpuset option not enabled for the current task's cpuset, this adds a call to a
      cpuset routine and failed bit test of the processor state flag PF_SPREAD_PAGE.
      
      On tasks in cpusets with "memory_spread" enabled, this adds a call to a cpuset
      routine that computes which of the task's mems_allowed nodes should be
      preferred for this allocation.
      
      If memory spreading applies to a particular allocation, then any other NUMA
      mempolicy does not apply.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      44110fe3
    • P
      [PATCH] cpuset memory spread basic implementation · 825a46af
      Committed by Paul Jackson
      This patch provides the implementation and cpuset interface for an alternative
      memory allocation policy that can be applied to certain kinds of memory
      allocations, such as the page cache (file system buffers) and some slab caches
      (such as inode caches).
      
      The policy is called "memory spreading." If enabled, it spreads out these
      kinds of memory allocations over all the nodes allowed to a task, instead of
      preferring to place them on the node where the task is executing.
      
      All other kinds of allocations, including anonymous pages for a task's stack
      and data regions, are not affected by this policy choice, and continue to be
      allocated preferring the node local to execution, as modified by the NUMA
      mempolicy.
      
      There are two boolean flag files per cpuset that control where the kernel
      allocates pages for the file system buffers and related in kernel data
      structures.  They are called 'memory_spread_page' and 'memory_spread_slab'.
      
      If the per-cpuset boolean flag file 'memory_spread_page' is set, then the
      kernel will spread the file system buffers (page cache) evenly over all the
      nodes that the faulting task is allowed to use, instead of preferring to put
      those pages on the node where the task is running.
      
      If the per-cpuset boolean flag file 'memory_spread_slab' is set, then the
      kernel will spread some file system related slab caches, such as for inodes
      and dentries evenly over all the nodes that the faulting task is allowed to
      use, instead of preferring to put those pages on the node where the task is
      running.
      
      The implementation is simple.  Setting the cpuset flags 'memory_spread_page'
      or 'memory_spread_slab' turns on the per-process flags PF_SPREAD_PAGE or
      PF_SPREAD_SLAB, respectively, for each task that is in the cpuset or
      subsequently joins that cpuset.  In subsequent patches, the page allocation
      calls for the affected page cache and slab caches are modified to perform an
      inline check for these flags, and if set, a call to a new routine
      cpuset_mem_spread_node() returns the node to prefer for the allocation.
      
      The cpuset_mem_spread_node() routine is also simple.  It uses the value of a
      per-task rotor cpuset_mem_spread_rotor to select the next node in the current
      task's mems_allowed to prefer for the allocation.
      
      This policy can provide substantial improvements for jobs that need to place
      thread local data on the corresponding node, but that need to access large
      file system data sets that must be spread across the several nodes in the
      job's cpuset in order to fit.  Without this patch, especially for jobs that
      might have one thread reading in the data set, the memory allocation across
      the nodes in the job's cpuset can become very uneven.
      
      A couple of Copyright year ranges are updated as well.  And a couple of email
      addresses that can be found in the MAINTAINERS file are removed.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      825a46af
    • P
      [PATCH] cpuset use combined atomic_inc_return calls · 8a39cc60
      Committed by Paul Jackson
      Replace pairs of calls to atomic_inc() and atomic_read() with a single call
      to atomic_inc_return(), saving a few bytes of source and kernel text.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8a39cc60
    • P
      [PATCH] cpuset cleanup not not operators · 7b5b9ef0
      Committed by Paul Jackson
      Since the test_bit() bit operator is boolean (returns 0 or 1), the double-not
      "!!" operations needed to convert a scalar (zero or not zero) to a boolean
      are not needed.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7b5b9ef0
    • C
      [PATCH] cpusets: only wakeup kswapd for zones in the current cpuset · 0b1303fc
      Committed by Christoph Lameter
      If we get under some memory pressure in a cpuset (we only scan zones that
      are in the cpuset for memory) then kswapd is woken up for all zones.  This
      patch only wakes up kswapd in zones that are part of the current cpuset.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0b1303fc
    • P
      [PATCH] rcutorture: tag success/failure line with module parameters · 95c38322
      Committed by Paul E. McKenney
      A long-running rcutorture test can overflow dmesg, so that the line
      containing the module parameters is lost.  Although it is usually possible
      to retrieve this information from the log files, it is much better to just
      tag it onto the final success/failure line so that it may be easily found.
      This patch does just that.
      Signed-off-by: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      95c38322
    • A
      [PATCH] kill include/linux/platform.h, default_idle() cleanup · cdb04527
      Committed by Adrian Bunk
      include/linux/platform.h contained nothing that was actually used except
      the default_idle() prototype, and is therefore removed by this patch.
      
      This patch does the following with the platform specific default_idle()
      functions on different architectures:
      - remove the unused function:
        - parisc
        - sparc64
      - make the needlessly global function static:
        - arm
        - h8300
        - m68k
        - m68knommu
        - s390
        - v850
        - x86_64
      - add a prototype in asm/system.h:
        - cris
        - i386
        - ia64
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Acked-by: Patrick Mochel <mochel@digitalimplant.org>
      Acked-by: Kyle McMartin <kyle@parisc-linux.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      cdb04527
    • A
      008accbb
      [PATCH] extract-ikconfig: use mktemp(1) · 66f9f59a
      Committed by Alexey Dobriyan
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      66f9f59a
    • J
      [PATCH] tvec_bases too large for per-cpu data · a4a6198b
      Committed by Jan Beulich
      With internal Xen-enabled kernels we see the kernel's static per-cpu data
      area exceed the limit of 32k on x86-64, and even native x86-64 kernels get
      fairly close to that limit.  I generally question whether it is reasonable
      to have data structures several kb in size allocated as per-cpu data when
      the space there is rather limited.
      
      The biggest arch-independent consumer is tvec_bases (over 4k on 32-bit
      archs, over 8k on 64-bit ones), which now gets converted to use dynamically
      allocated memory instead.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a4a6198b
    • A
      [PATCH] fs/coda/: proper prototypes · c98d8cfb
      Committed by Adrian Bunk
      Introduce a file fs/coda/coda_int.h with proper prototypes for some code.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Acked-by: Jan Harkes <jaharkes@cs.cmu.edu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c98d8cfb
    • A
      [PATCH] fs/ext2/: proper ext2_get_parent() prototype · 2c221290
      Committed by Adrian Bunk
      Add a proper prototype for ext2_get_parent().
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      2c221290
    • A
      [PATCH] fs/9p/: possible cleanups · 29c6e486
      Committed by Adrian Bunk
      - mux.c: v9fs_poll_mux() was inline but not static, resulting in needless
               object size bloat
      - mux.c: remove all "inline"s: gcc should know best what to inline
      - #if 0 the following unused global functions:
        - 9p.c: v9fs_v9fs_t_flush()
        - conv.c: v9fs_create_tauth()
        - mux.c: v9fs_mux_rpcnb()
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Cc: Eric Van Hensbergen <ericvh@ericvh.myip.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      29c6e486
    • O
      [PATCH] rcu_process_callbacks: don't cli() while testing ->nxtlist · caa9ee77
      Committed by Oleg Nesterov
      __rcu_process_callbacks() disables interrupts to protect itself from
      call_rcu() which adds new entries to ->nxtlist.
      
      However, we can check "->nxtlist != NULL" with interrupts enabled; we can't
      get "false positives" because call_rcu() can only change this condition
      from 0 to 1.
      
      Tested with rcutorture.ko.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      caa9ee77
    • B
      [PATCH] Range checking in do_proc_dointvec_(userhz_)jiffies_conv · cba9f33d
      Committed by Bart Samwel
      When (integer) sysctl values are in either seconds or centiseconds, but
      represented internally as jiffies, the allowable value range is decreased.
      This patch adds range checks to the conversion routines.
      
      For values in seconds: maximum LONG_MAX / HZ.
      
      For values in centiseconds: maximum (LONG_MAX / HZ) * USER_HZ.
      
      (BTW, does anyone else feel that an interface in seconds should not be
      accepting negative values?)
      Signed-off-by: Bart Samwel <bart@samwel.tk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      cba9f33d
    • B
      [PATCH] Represent laptop_mode as jiffies internally · ed5b43f1
      Committed by Bart Samwel
      Make the internal value for /proc/sys/vm/laptop_mode be stored as
      jiffies instead of seconds.  Let the sysctl interface do the conversions,
      instead of doing on-the-fly conversions every time the value is used.
      
      Add a description of the fact that laptop_mode doubles as a flag and a
      timeout to the comment above the laptop_mode variable.
      Signed-off-by: Bart Samwel <bart@samwel.tk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ed5b43f1
    • B
      [PATCH] Represent dirty_*_centisecs as jiffies internally · f6ef9438
      Committed by Bart Samwel
      Make the internal values for:
      
      /proc/sys/vm/dirty_writeback_centisecs
      /proc/sys/vm/dirty_expire_centisecs
      
      are stored as jiffies instead of centiseconds.  Let the sysctl interface do
      the conversions with full precision using clock_t_to_jiffies, instead of
      doing overflow-sensitive on-the-fly conversions every time the values are
      used.
      
      Cons: apparent precision loss if HZ is not a multiple of 100, because of
      conversion back and forth.  This is a common problem for all sysctl values
      that use proc_dointvec_userhz_jiffies.  (There is only one other in-tree
      use, in net/core/neighbour.c.)
      Signed-off-by: Bart Samwel <bart@samwel.tk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f6ef9438
    • A
      [PATCH] free_uid() locking improvement · 36f57413
      Committed by Andrew Morton
      Reduce lock hold times in free_uid().
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      36f57413
    • P
      [PATCH] bitmap: region restructuring · 3cf64b93
      Committed by Paul Jackson
      Restructure the bitmap_*_region() operations, to avoid code duplication.
      
      Also reduces binary text size by about 100 bytes (ia64 arch).  The original
      Bottomley bitmap_*_region patch added about 1000 bytes of compiled kernel text
      (ia64).  The Mundt multiword extension added another 600 bytes, and this
      restructuring patch gets back about 100 bytes.
      
      But the real motivation was the reduced amount of duplicated code.
      
      Tested by Paul Mundt using <= BITS_PER_LONG as well as power of
      2 aligned multiword spanning allocations.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3cf64b93
    • P
      [PATCH] bitmap: region multiword spanning support · 74373c6a
      Committed by Paul Mundt
      Add support to the lib/bitmap.c bitmap_*_region() routines for bitmap
      regions larger than one word (nbits > BITS_PER_LONG).  This removes a
      BUG_ON() in lib/bitmap.c.
      
      I have an updated store queue API for SH that is currently using this with
      relative success, and at first glance, it seems like this could be useful for
      x86 (arch/i386/kernel/pci-dma.c) as well.  Particularly for anything using
      dma_declare_coherent_memory() on large areas and that attempts to allocate
      large buffers from that space.
      
      Paul Jackson also did some cleanup to this patch.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      74373c6a
    • P
      [PATCH] bitmap: region cleanup · 87e24802
      Committed by Paul Jackson
      Paul Mundt <lethal@linux-sh.org> says:
      
      This patch set implements a number of patches to clean up and restructure the
      bitmap region code, in addition to extending the interface to support
      multiword spanning allocations.
      
      The current implementation (before this patch set) is limited by only being
      able to allocate pages <= BITS_PER_LONG, as noted by the strategically
      positioned BUG_ON() at lib/bitmap.c:752:
      
              /* We don't do regions of pages > BITS_PER_LONG.  The
               * algorithm would be a simple look for multiple zeros in the
               * array, but there's no driver today that needs this.  If you
               * trip this BUG(), you get to code it... */
              BUG_ON(pages > BITS_PER_LONG);
      
      As I seem to have been the first person to trigger this, the result ends up
      being the following patch set with the help of Paul Jackson.
      
      The final patch in the series eliminates quite a bit of code duplication, so
      the bitmap code size ends up being smaller than the current implementation as
      an added bonus.
      
      After these are applied, it should already be possible to do multiword
      allocations with dma_alloc_coherent() out of ranges established by
      dma_declare_coherent_memory() on x86 without having to change any of the code,
      and the SH store queue API will follow up on this as the other user that needs
      support for this.
      
      This patch:
      
      Some code cleanup on the lib/bitmap.c bitmap_*_region() routines:
      
       * spacing
       * variable names
       * comments
      
      Has no change to code function.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      87e24802