24 Jun, 2005 (40 commits)
    • [PATCH] I2O: bugfixes and compatibility enhancements · 61fbfa81
      Markus Lidel committed
      Changes:
      
       - Fixed sysfs bug where user and parent links were added to the I2O
         device itself
       - Fixed bug when calculating TID for the event handler and cleaned up the
         workflow of i2o_driver_dispatch()
       - Fixed oops when no I2O device could be found for an event delivered to
         Exec-OSM
       - Fixed initialization of spinlock in Exec-OSM
       - Fixed memory leak in i2o_cfg_passthru() and i2o_cfg_passthru32()
       - Removed MTRR support
       - Added PCI ID of Promise SX6000 with firmware >= 1.20.x.x
       - Turned off caching for ioremapped memory of in_queue
       - Added initialization sequence for Promise controllers
       - Moved definition of u8 / u16 / u32 for raidutils before first use
      Signed-off-by: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] tpm: TPMs on additional LPC bus · a6df7da8
      Kylene Hall committed
      Add support for TPMs on additional LPC buses.
      Signed-off-by: Kylene Hall <kjhall@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ipmi: add power cycle capability · 3b625943
      Corey Minyard committed
      This patch adds "power cycle" functionality to the IPMI power off module
      ipmi_poweroff.  It also contains changes to support procfs control of the
      feature.
      
      The power cycle action is considered an optional chassis control in the IPMI
      specification.  However, it is definitely useful when the hardware supports
      it.  A power cycle is usually required in order to reset a firmware in a bad
      state.  This action is critical to allow remote management of servers.
      
      The implementation adds power cycle as an optional capability of the
      ipmi_poweroff module.  It can be toggled dynamically through the proc entry
      mentioned above.  During a power down, if the capability is enabled, the
      power cycle command is sent to the BMC firmware.  If it fails, whether due
      to lack of support or some error, the module falls back to sending the
      power off command.
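
      As a rough sketch of that fallback in C: the chassis-control values come
      from the IPMI specification, while send_chassis_control() is a purely
      hypothetical stand-in for the module's message-send path.

        #define IPMI_CHASSIS_POWER_DOWN   0x00  /* chassis control: power down */
        #define IPMI_CHASSIS_POWER_CYCLE  0x02  /* chassis control: power cycle */

        extern int send_chassis_control(unsigned char op);  /* hypothetical */

        static void powerdown_with_cycle_fallback(void)
        {
                /* try power cycle first; on non-support or error,
                 * fall back to a plain power off */
                if (send_chassis_control(IPMI_CHASSIS_POWER_CYCLE) != 0)
                        send_chassis_control(IPMI_CHASSIS_POWER_DOWN);
        }
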
      Signed-off-by: Christopher A. Poblete <Chris_Poblete@dell.com>
      Signed-off-by: Corey Minyard <minyard@acm.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] quota: reiserfs: improve quota credit estimates · 556a2a45
      Jan Kara committed
      Use improved credit estimates for quota operations.  Also reserve space
      for a quota operation in a transaction only if the filesystem was mounted
      with a quota option.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] quota: ext3: Improve quota credit estimates · 1f54587b
      Jan Kara committed
      Use improved credit estimates for quota operations.  Also reserve space
      for a quota operation in a transaction only if the filesystem was mounted
      with quota options.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] quota: improve credits estimates · 4e5117ba
      Jan Kara committed
      Improve the estimates of the number of credits needed for a quota
      transaction.  We now distinguish blocks that might need to be allocated
      from blocks that only need to be rewritten, and deletion of a quota
      structure from creation of a new one.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] pass iocb to dio_iodone_t · 92198f7e
      Christoph Hellwig committed
      XFS will have to look at iocb->private to fix aio+dio.  No other filesystem
      is using the blockdev_direct_IO* end_io callback.
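
      For reference, a hedged sketch of the resulting callback shape; the exact
      typedef in fs.h may differ in detail, the point being that the iocb is now
      the first argument so XFS can reach iocb->private:

        typedef void (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
                                    ssize_t bytes, void *private);
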
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Keys: Make request-key create an authorisation key · 3e30148c
      David Howells committed
      The attached patch makes the following changes:
      
       (1) There's a new special key type called ".request_key_auth".
      
           This is an authorisation key for when one process requests a key and
           another process is started to construct it. This type of key cannot be
           created by the user; nor can it be requested by kernel services.
      
           Authorisation keys hold two references:
      
           (a) Each refers to a key being constructed. When the key being
           	 constructed is instantiated the authorisation key is revoked,
           	 rendering it of no further use.
      
           (b) The "authorising process". This is either:
      
           	 (i) the process that called request_key(), or:
      
           	 (ii) if the process that called request_key() itself had an
           	      authorisation key in its session keyring, then the authorising
           	      process referred to by that authorisation key will also be
           	      referred to by the new authorisation key.
      
      	 This means that the process that initiated a chain of key requests
      	 will authorise the lot of them, and will, by default, wind up with
      	 the keys obtained from them in its keyrings.
      
       (2) request_key() creates an authorisation key which is then passed to
           /sbin/request-key as part of a new session keyring.
      
       (3) When request_key() is searching for a key to hand back to the caller, if
           it comes across an authorisation key in the session keyring of the
           calling process, it will also search the keyrings of the process
           specified therein and it will use the specified process's credentials
           (fsuid, fsgid, groups) to do that rather than the calling process's
           credentials.
      
           This allows a process started by /sbin/request-key to find keys belonging
           to the authorising process.
      
       (4) A key can be read, even if the process executing KEYCTL_READ doesn't have
           direct read or search permission if that key is contained within the
           keyrings of a process specified by an authorisation key found within the
           calling process's session keyring, and is searchable using the
           credentials of the authorising process.
      
           This allows a process started by /sbin/request-key to read keys belonging
           to the authorising process.
      
       (5) The magic KEY_SPEC_*_KEYRING key IDs when passed to KEYCTL_INSTANTIATE or
           KEYCTL_NEGATE will specify a keyring of the authorising process, rather
           than the process doing the instantiation.
      
       (6) One of the process keyrings can be nominated as the default to which
           request_key() should attach new keys if not otherwise specified. This is
           done with KEYCTL_SET_REQKEY_KEYRING and one of the KEY_REQKEY_DEFL_*
           constants. The current setting can also be read using this call. (A
           usage sketch follows this list.)
      
       (7) request_key() is partially interruptible. If it is waiting for another
           process to finish constructing a key, it can be interrupted. This permits
           a request-key cycle to be broken without recourse to rebooting.
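
      A hedged userspace sketch of item (6); the raw syscall is used since a
      library wrapper cannot be assumed, and the constants follow the
      description above:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <linux/keyctl.h>

        int main(void)
        {
                /* nominate the session keyring as the default destination for
                 * keys obtained via request_key(); returns the old default */
                long old = syscall(__NR_keyctl, KEYCTL_SET_REQKEY_KEYRING,
                                   KEY_REQKEY_DEFL_SESSION_KEYRING);

                printf("previous default: %ld\n", old);
                return 0;
        }
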
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-Off-By: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Keys: Pass session keyring to call_usermodehelper() · 7888e7ff
      David Howells committed
      The attached patch makes it possible to pass a session keyring through to the
      process spawned by call_usermodehelper().  This allows patch 3/3 to pass an
      authorisation key through to /sbin/request-key, thus permitting better access
      controls when doing just-in-time key creation.
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] keys: Discard key spinlock and use RCU for key payload · 76d8aeab
      David Howells committed
      The attached patch changes the key implementation in a number of ways:
      
       (1) It removes the spinlock from the key structure.
      
       (2) The key flags are now accessed using atomic bitops instead of
           write-locking the key spinlock and using C bitwise operators.
      
           The three instantiation flags are handled with the construction
           semaphore held during the request_key/instantiate/negate sequence,
           rendering the spinlock superfluous.
      
           The key flags are also now bit numbers not bit masks.
      
       (3) The key payload is now accessed using RCU. This permits the recursive
           keyring search algorithm to be simplified greatly since no locks need be
           taken other than the usual RCU preemption disablement. Searching now does
           not require any locks or semaphores to be held; merely that the starting
           keyring be pinned. (A read-side sketch follows this list.)
      
       (4) The keyring payload now includes an RCU head so that it can be disposed
           of by call_rcu(). This requires that the payload be copied on unlink to
           prevent introducing races in copy-down vs search-up.
      
       (5) The user key payload is now a structure with the data following it. It
           includes an RCU head like the keyring payload and for the same reason. It
           also contains a data length because the data length in the key may be
           changed on another CPU whilst an RCU protected read is in progress on the
           payload. This would then see the supposed RCU payload and the on-key data
           length getting out of sync.
      
           I'm tempted to drop the key's datalen entirely, except that it's used in
           conjunction with quota management and so is a little tricky to get rid
           of.
      
       (6) Update the keys documentation.
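
      A hedged sketch of the read side implied by points (3) and (5); the
      struct and field names follow the description and are assumptions rather
      than the patch's literal code:

        /* kernel-side fragment: read a user key's payload under RCU */
        static void peek_user_key(struct key *key)
        {
                struct user_key_payload *upayload;

                rcu_read_lock();
                upayload = rcu_dereference(key->payload.data);
                /* trust upayload->datalen, not key->datalen, which may be
                 * updated on another CPU while this read is in progress */
                rcu_read_unlock();
        }
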
      Signed-Off-By: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [TCP]: Report congestion control algorithm in tcp_diag. · 056ede6c
      Stephen Hemminger committed
      Enhancement to the tcp_diag interface used by the iproute2 ss command
      to report the TCP congestion control algorithm being used by a socket.
      Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Add pluggable congestion control algorithm infrastructure. · 317a76f9
      Stephen Hemminger committed
      Allow TCP to have multiple pluggable congestion control algorithms.
      Algorithms are defined by a set of operations and can be built in
      or modules.  The legacy "new RENO" algorithm is used as a starting
      point and fallback.
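
      A hedged sketch of how an algorithm plugs in; the tcp_congestion_ops
      field names and the registration call follow this infrastructure as
      introduced here but should be checked against the tree, and the my_*
      hooks stand in for the module's own functions (not shown):

        static struct tcp_congestion_ops my_cong = {
                .name       = "mycong",
                .owner      = THIS_MODULE,
                .ssthresh   = my_ssthresh,    /* slow-start threshold hook */
                .cong_avoid = my_cong_avoid,  /* cwnd growth hook */
        };

        static int __init my_cong_init(void)
        {
                return tcp_register_congestion_control(&my_cong);
        }
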
      Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [PATCH] better USB_MON dependencies · 4749f32d
      Adrian Bunk committed
      This makes the USB_MON Kconfig dependencies less confusing.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Introduce tty_unregister_ldisc() · bfb07599
      Alexey Dobriyan committed
      It's a bit strange to see tty_register_ldisc() calls in modules' exit
      functions, so introduce a dedicated tty_unregister_ldisc().
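
      A hedged before/after sketch of a module exit path (N_MYLDISC is an
      illustrative line-discipline number):

        static void __exit my_ldisc_exit(void)
        {
                /* before: unregistering meant re-registering a NULL ldisc,
                 * e.g. tty_register_ldisc(N_MYLDISC, NULL); now: */
                tty_unregister_ldisc(N_MYLDISC);
        }
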
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] aio: make wait_queue ->task ->private · c43dc2fd
      Benjamin LaHaise committed
      In the upcoming aio_down patch, it is useful to store a private data
      pointer in the kiocb's wait_queue.  Since we provide our own wake up
      function and do not require the task_struct pointer, it makes sense to
      convert the task pointer into a generic private pointer.
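
      A hedged sketch of the pattern this enables (names are illustrative): a
      custom wake function plus caller data in ->private instead of a
      task_struct pointer.

        /* wait entry embedded in some long-lived object (illustrative) */
        init_waitqueue_func_entry(&obj->wait, my_wake_function);
        obj->wait.private = obj;  /* caller data, not a task_struct */
        add_wait_queue(&some_queue, &obj->wait);
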
      Signed-off-by: Benjamin LaHaise <benjamin.c.lahaise@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] remove <linux/xattr_acl.h> · 9a59f452
      Christoph Hellwig committed
      This file duplicates <linux/posix_acl_xattr.h>, using slightly different
      names.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] acl endianness annotations · f9fd27a2
      Christoph Hellwig committed
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Remove f_error field from struct file · 45778ca8
      Christoph Lameter committed
      The following patch removes the f_error field and all checks of f_error.
      
      Trond said:
      
        f_error was introduced for NFS, and made sense when we were guaranteed
        always to have a file pointer around when write errors occurred.  Since
        then, we have (for various reasons) had to introduce the nfs_open_context in
        order to track the file read/write state, and it made sense to move our
        f_error tracking there too.
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] block: add unlocked_ioctl support for block devices · bb93e3a5
      Arnd Bergmann committed
      This patch allows block device drivers to convert their ioctl functions to
      unlocked_ioctl() like character devices and other subsystems.  All
      functions that were called with the BKL held before are still used that
      way, but I would not be surprised if it could be removed from the ioctl
      functions in drivers/block/ioctl.c themselves.
      
      As a side note, I found that compat_blkdev_ioctl() acquires the BKL as
      well, which looks like a bug.  I have checked that every user of
      disk->fops->compat_ioctl() in the current git tree gets the BKL itself, so
      it could easily be removed from compat_blkdev_ioctl().
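
      A hedged sketch of the driver-side conversion this permits; all names
      are illustrative and the unlocked_ioctl prototype should be checked
      against the header:

        static DECLARE_MUTEX(mydrv_sem);  /* driver-private lock */

        static long mydrv_unlocked_ioctl(struct file *file, unsigned cmd,
                                         unsigned long arg)
        {
                long ret;

                /* runs without the BKL: serialize with our own semaphore */
                down(&mydrv_sem);
                ret = mydrv_do_ioctl(file, cmd, arg);  /* hypothetical helper */
                up(&mydrv_sem);
                return ret;
        }

        static struct block_device_operations mydrv_fops = {
                .owner          = THIS_MODULE,
                .unlocked_ioctl = mydrv_unlocked_ioctl,
        };
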
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Improve CD/DVD packet driver write performance · 46c271be
      Peter Osterlund committed
      This patch improves write performance for the CD/DVD packet writing
      driver.  The logic for switching between reading and writing has been
      changed so that streaming writes are no longer interrupted by read
      requests.
      Signed-off-by: Peter Osterlund <petero2@telia.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Don't force O_LARGEFILE for 32 bit processes on ia64 · ef3daeda
      Yoav Zach committed
      In the ia64 kernel, the O_LARGEFILE flag is forced when opening a file.
      This is problematic for execution of 32 bit processes, which are not
      largefile aware, whether run via SW emulation or HW execution.
      
      For such processes, the problem is two-fold:
      
      1) When trying to open a file that is larger than 4G,
         the operation should fail, but it doesn't
      2) Writing to an offset larger than 4G should fail, but
         it doesn't
      
      The proposed patch takes advantage of the way 32 bit processes are
      identified in ia64 systems.  Such processes have PER_LINUX32 for their
      personality.  With the patch, the ia64 kernel will not enforce the
      O_LARGEFILE flag if the current process has PER_LINUX32 set.  The behavior
      for all other architectures remains unchanged.
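
      A hedged sketch of the mechanism; the force_o_largefile() predicate
      name is an assumption based on the convention this change established:

        /* ia64 override (asm-ia64/fcntl.h): only force the flag when the
         * process does not have a 32 bit personality */
        #define force_o_largefile() \
                (personality(current->personality) != PER_LINUX32)

        /* generic open path (sketch): */
        if (force_o_largefile())
                flags |= O_LARGEFILE;
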
      Signed-off-by: Yoav Zach <yoav.zach@intel.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] setuid core dump · d6e71144
      Alan Cox committed
      Add a new `suid_dumpable' sysctl:
      
      This value can be used to query and set the core dump mode for setuid
      or otherwise protected/tainted binaries. The modes are
      
      0 - (default) - traditional behaviour.  Any process which has changed
          privilege levels or is execute only will not be dumped
      
      1 - (debug) - all processes dump core when possible.  The core dump is
          owned by the current user and no security is applied.  This is intended
          for system debugging situations only.  Ptrace is unchecked.
      
      2 - (suidsafe) - any binary which normally would not be dumped is dumped
          readable by root only.  This allows the end user to remove such a dump but
          not access it directly.  For security reasons core dumps in this mode will
          not overwrite one another or other files.  This mode is appropriate when
          administrators are attempting to debug problems in a normal environment.
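
      A hedged userspace sketch of selecting the suidsafe mode; the proc path
      is derived from the sysctl name and assumed here:

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/fs/suid_dumpable", "w");

                if (!f)
                        return 1;
                fputs("2\n", f);  /* 2 == suidsafe, per the list above */
                return fclose(f) ? 1 : 0;
        }
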
      
      (akpm:
      
      > > +EXPORT_SYMBOL(suid_dumpable);
      >
      > EXPORT_SYMBOL_GPL?
      
      No problem to me.
      
      > >  	if (current->euid == current->uid && current->egid == current->gid)
      > >  		current->mm->dumpable = 1;
      >
      > Should this be SUID_DUMP_USER?
      
      Actually the feedback I had from last time was that the SUID_ defines
      should go because it's clearer to follow the numbers. They can go
      everywhere (and there are lots of places where dumpable is tested/used
      as a bool in untouched code)
      
      > Maybe this should be renamed to `dump_policy' or something.  Doing that
      > would help us catch any code which isn't using the #defines, too.
      
      Fair comment. The patch was designed to be easy to maintain for Red Hat
      rather than for merging. Changing that field would create a gigantic
      diff because it is used all over the place.
      
      )
      Signed-off-by: Alan Cox <alan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: Temporary disarming of reentrant probe · ea32c65c
      Prasanna S Panchamukhi committed
      In situations where a kprobes handler calls a routine which has a probe on
      it, kprobes_handler() disarms the new probe forever.  This patch removes
      that limitation by temporarily disarming the new probe.  When another probe
      hits while the old probe is being handled, kprobes_handler() saves the
      previous kprobes state and handles the new probe without calling the
      handlers registered for it.  kprobe_post_handler() then restores the
      previous kprobes state and normal execution continues.
      
      However, on the x86_64 architecture, re-entrancy is provided only through
      pre_handler().  If a routine with a probe on it is referenced through
      post_handler(), then the probes on that routine are disarmed forever,
      since the exception stack gets changed after the processor single-steps
      the instruction of the new probe.
      
      This patch includes generic changes to support temporary disarming on
      reentrancy of probes.
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: moves lock-unlock to non-arch kprobe_flush_task · 0aa55e4d
      Hien Nguyen committed
      This patch moves the lock/unlock of the arch specific kprobe_flush_task()
      to the non-arch specific kprobe_flush_task().
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Move kprobe [dis]arming into arch specific code · 7e1048b1
      Rusty Lynch committed
      The architecture independent code of the current kprobes implementation
      arms and disarms kprobes at registration time.  The problem is that the
      code assumes arming and disarming are just done by a simple write of some
      magic value to an address.  This is problematic for ia64, where our
      instructions look more like structures, and we cannot insert break points
      by just doing something like:
      
      *p->addr = BREAKPOINT_INSTRUCTION;
      
      The following patch to 2.6.12-rc4-mm2 adds two new architecture dependent
      functions:
      
           * void arch_arm_kprobe(struct kprobe *p)
           * void arch_disarm_kprobe(struct kprobe *p)
      
      and then adds the new functions for each of the architectures that already
      implement kprobes (sparc64/ppc64/i386/x86_64).
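
      On i386, where a breakpoint really is a single byte write, a hedged
      sketch of what the two hooks reduce to (mirroring the existing i386
      kprobes code, as an assumption):

        void arch_arm_kprobe(struct kprobe *p)
        {
                *p->addr = BREAKPOINT_INSTRUCTION;  /* int3 */
                flush_icache_range((unsigned long) p->addr,
                                   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
        }

        void arch_disarm_kprobe(struct kprobe *p)
        {
                *p->addr = p->opcode;               /* restore the saved byte */
                flush_icache_range((unsigned long) p->addr,
                                   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
        }
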
      
      I thought arch_[dis]arm_kprobe was the most descriptive of what was really
      happening, but each of the architectures already had a disarm_kprobe()
      function that was really a "disarm and do some other clean-up items as
      needed when you stumble across a recursive kprobe." So...  I took the
      liberty of changing the code that was calling disarm_kprobe() to call
      arch_disarm_kprobe(), and then do the cleanup in the block of code dealing
      with the recursive kprobe case.
      
      So far this patch has been tested on i386, x86_64, and ppc64, but it still
      needs to be tested on sparc64.
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: function-return probes · b94cce92
      Hien Nguyen committed
      This patch adds function-return probes to kprobes for the i386
      architecture.  This enables you to establish a handler to be run when a
      function returns.
      
      1. API
      
      Two new functions are added to kprobes:
      
      	int register_kretprobe(struct kretprobe *rp);
      	void unregister_kretprobe(struct kretprobe *rp);
      
      2. Registration and unregistration
      
      2.1 Register
      
        To register a function-return probe, the user populates the following
        fields in a kretprobe object and calls register_kretprobe() with the
        kretprobe address as an argument:
      
        kp.addr - the function's address
      
        handler - this function is run after the ret instruction executes, but
        before control returns to the return address in the caller.
      
        maxactive - The maximum number of instances of the probed function that
        can be active concurrently.  For example, if the function is non-
        recursive and is called with a spinlock or mutex held, maxactive = 1
        should be enough.  If the function is non-recursive and can never
        relinquish the CPU (e.g., via a semaphore or preemption), NR_CPUS should
        be enough.  maxactive is used to determine how many kretprobe_instance
        objects to allocate for this particular probed function.  If maxactive <=
        0, it is set to a default value (if CONFIG_PREEMPT maxactive=max(10, 2 *
        NR_CPUS) else maxactive=NR_CPUS)
      
        For example:
      
          struct kretprobe rp;
          rp.kp.addr = /* entrypoint address */
          rp.handler = /*return probe handler */
          rp.maxactive = /* e.g., 1 or NR_CPUS or 0, see the above explanation */
          register_kretprobe(&rp);
      
        The following field may also be of interest:
      
        nmissed - Initialized to zero when the function-return probe is
        registered, and incremented every time the probed function is entered but
        there is no kretprobe_instance object available for establishing the
        function-return probe (i.e., because maxactive was set too low).
      
      2.2 Unregister
      
        To unregister a function-return probe, the user calls
        unregister_kretprobe() with the same kretprobe object as registered
        previously.  If a probed function is running when the return probe is
        unregistered, the function will return as expected, but the handler won't
        be run.
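
      Putting 2.1 and 2.2 together, a hedged sketch of a complete module
      against this API; the probed symbol (do_fork) and the use of
      kallsyms_lookup_name() to resolve it are illustrative assumptions:

        #include <linux/module.h>
        #include <linux/kprobes.h>
        #include <linux/kallsyms.h>
        #include <linux/errno.h>

        static int my_ret_handler(struct kretprobe_instance *ri,
                                  struct pt_regs *regs)
        {
                printk(KERN_INFO "probed function returned\n");
                return 0;
        }

        static struct kretprobe rp = {
                .handler   = my_ret_handler,
                .maxactive = 1,  /* see the maxactive notes above */
        };

        static int __init rp_init(void)
        {
                rp.kp.addr = (kprobe_opcode_t *)
                        kallsyms_lookup_name("do_fork");  /* illustrative */
                if (!rp.kp.addr)
                        return -ENOENT;
                return register_kretprobe(&rp);
        }

        static void __exit rp_exit(void)
        {
                unregister_kretprobe(&rp);
                printk(KERN_INFO "missed %d probes\n", rp.nmissed);
        }

        module_init(rp_init);
        module_exit(rp_exit);
        MODULE_LICENSE("GPL");
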
      
      3. Limitations
      
      3.1 This patch supports only the i386 architecture, but patches for
          x86_64 and ppc64 are anticipated soon.
      
      3.2 Return probes operate by replacing the return address on the stack
          (or in a known register, such as the lr register on ppc).  This may
          cause __builtin_return_address(0), when invoked from the return-probed
          function, to return the address of the return-probes trampoline.
      
      3.3 This implementation uses the "Multiprobes at an address" feature in
          2.6.12-rc3-mm3.
      
      3.4 Due to a limitation in multi-probes, you cannot currently establish
          a return probe and a jprobe on the same function.  A patch to remove
          this limitation is being tested.
      
      This feature is required by SystemTap (http://sourceware.org/systemtap),
      and reflects ideas contributed by several SystemTap developers, including
      Will Cohen and Ananth Mavinakayanahalli.
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Frederik Deweerdt <frederik.deweerdt@laposte.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] quota: consolidate code surrounding vfs_quota_on_mount · 84de856e
      Christoph Hellwig committed
      Move some code duplicated in both callers into vfs_quota_on_mount().
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jan Kara <jack@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] add check to /proc/devices read routines · ac20427e
      Neil Horman committed
      Add checks to get_chrdev_list() and get_blkdev_list() to prevent reads of
      /proc/devices from spilling past the provided page when more than 4096
      bytes of string data are generated from all the registered character and
      block devices in a system.
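
      A hedged sketch of the kind of guard this adds while formatting entries
      into the single page buffer; the structure names and the margin are
      assumptions, not the patch's literal code:

        len = 0;
        for (d = first_device; d; d = d->next) {
                /* stop before an entry could run past the page */
                if (len > PAGE_SIZE - 64)
                        break;
                len += sprintf(page + len, "%3d %s\n", d->major, d->name);
        }
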
      Signed-off-by: Neil Horman <nhorman@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: <viro@parcelfarce.linux.theplanet.co.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] optimise loop driver a bit · 35a82d1a
      Nick Piggin committed
      Looks like locking can be optimised quite a lot.  Increase lock widths
      slightly so lo_lock is taken fewer times per request.  Also it was quite
      trivial to cover lo_pending with that lock, and remove the atomic
      requirement.  This also makes memory ordering explicitly correct, which is
      nice (not that I particularly saw any mem ordering bugs).
      
      Test was reading 4 250MB files in parallel on ext2-on-tmpfs filesystem (1K
      block size, 4K page size).  System is 2 socket Xeon with HT (4 thread).
      
      intel:/home/npiggin# umount /dev/loop0 ; mount /dev/loop0 /mnt/loop ; /usr/bin/time ./mtloop.sh
      
      Before:
      0.24user 5.51system 0:02.84elapsed 202%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.52system 0:02.88elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.57system 0:02.89elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
      0.22user 5.51system 0:02.90elapsed 197%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.44system 0:02.91elapsed 193%CPU (0avgtext+0avgdata 0maxresident)k
      
      After:
      0.07user 2.34system 0:01.68elapsed 143%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.37system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.39system 0:01.68elapsed 145%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.36system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.42system 0:01.68elapsed 147%CPU (0avgtext+0avgdata 0maxresident)k
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] create a kstrdup library function · 543537bd
      Paulo Marques committed
      This patch creates a new kstrdup library function and changes the "local"
      implementations in several places to use this function.
      
      Most of the changes come from the sound and net subsystems.  The sound part
      had already been acknowledged by Takashi Iwai and the net part by David S.
      Miller.
      
      I left UML alone for now because I would need more time to read the code
      carefully before making changes there.
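
      A minimal sketch of what such a helper looks like (the actual patch may
      differ in detail; in this era the gfp flags were a plain unsigned int
      rather than gfp_t):

        #include <linux/slab.h>
        #include <linux/string.h>

        char *kstrdup(const char *s, unsigned int gfp)
        {
                size_t len;
                char *buf;

                if (!s)
                        return NULL;

                len = strlen(s) + 1;
                buf = kmalloc(len, gfp);
                if (buf)
                        memcpy(buf, s, len);
                return buf;
        }
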
      Signed-off-by: Paulo Marques <pmarques@grupopie.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fix for prune_icache()/forced final iput() races · 991114c6
      Alexander Viro committed
      Based on analysis and a patch from Russ Weight <rweight@us.ibm.com>
      
      There is a race condition that can occur if an inode is allocated and then
      released (using iput) during the ->fill_super functions.  The race
      condition is between kswapd and mount.
      
      For most filesystems this can only happen in an error path when kswapd is
      running concurrently.  For isofs, however, the error can occur in a more
      common code path (which is how the bug was found).
      
      The logic here is "we want final iput() to free inode *now* instead of
      letting it sit in cache if fs is going down or had not quite come up".  The
      problem is with kswapd seeing such inodes in the middle of being killed and
      happily taking over.
      
      The clean solution would be to tell kswapd to leave those inodes alone and
      let our final iput deal with them.  I.e.  add a new flag
      (I_FORCED_FREEING), set it before write_inode_now() there and make
      prune_icache() leave those alone.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] timers: introduce try_to_del_timer_sync() · fd450b73
      Oleg Nesterov committed
      This patch splits del_timer_sync() into 2 functions.  The new one,
      try_to_del_timer_sync(), returns -1 when it hits an executing timer.
      
      It can be used in interrupt context, or when the caller holds locks which
      can prevent completion of the timer's handler.
      
      NOTE.  Currently it can't be used in interrupt context in UP case, because
      ->running_timer is used only with CONFIG_SMP.
      
      Should the need arise, it is possible to kill the #ifdef CONFIG_SMP in
      set_running_timer(); it is cheap.
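
      A hedged usage sketch of the retry pattern this enables when the caller
      holds a lock that the timer handler also takes (dev and its fields are
      illustrative):

        spin_lock(&dev->lock);
        while (try_to_del_timer_sync(&dev->timer) < 0) {
                /* the handler is running: drop the lock so it can finish */
                spin_unlock(&dev->lock);
                cpu_relax();
                spin_lock(&dev->lock);
        }
        /* timer is now deleted and not running */
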
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] timers fixes/improvements · 55c888d6
      Oleg Nesterov committed
      This patch tries to solve the following problems:
      
      1. del_timer_sync() is racy. The timer can be fired again after
         del_timer_sync() has checked all cpus and before it rechecks
         timer_pending().
      
      2. It has scalability problems. All cpus are scanned to determine
         if the timer is running on that cpu.
      
         With this patch del_timer_sync is O(1) and no slower than plain
         del_timer(pending_timer), unless it has to actually wait for
         completion of the currently running timer.
      
         The only restriction is that the recurring timer should not use
         add_timer_on().
      
      3. The timers are not serialized with respect to themselves.
      
         If CPU_0 does mod_timer(jiffies+1) while the timer is currently
         running on CPU_1, it is quite possible that the local interrupt on
         CPU_0 will start that timer before it has finished on CPU_1.
      
      4. The timers locking is suboptimal. __mod_timer() takes 3 locks
         at once and still requires wmb() in del_timer/run_timers.
      
         The new implementation takes 2 locks sequentially and does not
         need memory barriers.
      
      Currently ->base != NULL means that the timer is pending. In that case
      ->base.lock is used to lock the timer. __mod_timer also takes timer->lock
      because ->base can be == NULL.
      
      This patch uses timer->entry.next != NULL as indication that the timer is
      pending. So it does __list_del(), entry->next = NULL instead of list_del()
      when the timer is deleted.
      
      The ->base field is used for hashed locking only, it is initialized
      in init_timer() which sets ->base = per_cpu(tvec_bases). When the
      tvec_bases.lock is locked, it means that all timers which are tied
      to this base via timer->base are locked, and the base itself is locked
      too.
      
      So __run_timers/migrate_timers can safely modify all timers which could
      be found on ->tvX lists (pending timers).
      
      When the timer's base is locked, and the timer removed from ->entry list
      (which means that _run_timers/migrate_timers can't see this timer), it is
      possible to set timer->base = NULL and drop the lock: the timer remains
      locked.
      
      This patch adds lock_timer_base() helper, which waits for ->base != NULL,
      locks the ->base, and checks it is still the same.
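
      A hedged sketch of that helper's shape (simplified; the real type and
      field names in the patch may differ):

        static struct timer_base_s *lock_timer_base(struct timer_list *timer,
                                                    unsigned long *flags)
        {
                struct timer_base_s *base;

                for (;;) {
                        base = timer->base;
                        if (base != NULL) {
                                spin_lock_irqsave(&base->lock, *flags);
                                if (base == timer->base)
                                        return base;  /* locked, still ours */
                                spin_unlock_irqrestore(&base->lock, *flags);
                        }
                        /* base was NULL or changed under us: retry */
                        cpu_relax();
                }
        }
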
      
      __mod_timer() schedules the timer on the local CPU and changes its base.
      However, it does not lock both old and new bases at once. It locks the
      timer via lock_timer_base(), deletes the timer, sets ->base = NULL, and
      unlocks old base. Then __mod_timer() locks new_base, sets ->base = new_base,
      and adds this timer. This simplifies the code, because AB-BA deadlock is not
      possible. __mod_timer() also ensures that the timer's base is not changed
      while the timer's handler is running on the old base.
      
      __run_timers(), del_timer() do not change ->base anymore, they only clear
      pending flag.
      
      So del_timer_sync() can test timer->base->running_timer == timer to detect
      whether it is running or not.
      
      We don't need timer_list->lock anymore, this patch kills it.
      
      We also don't need barriers. del_timer() and __run_timers() used smp_wmb()
      before clearing timer's pending flag. It was needed because __mod_timer()
      did not lock old_base if the timer is not pending, so __mod_timer()->list_add()
      could race with del_timer()->list_del(). With this patch these functions are
      serialized through base->lock.
      
      One problem. TIMER_INITIALIZER can't use per_cpu(tvec_bases). So this patch
      adds global
      
              struct timer_base_s {
                      spinlock_t lock;
                      struct timer_list *running_timer;
              } __init_timer_base;
      
      which is used by TIMER_INITIALIZER. The corresponding fields in tvec_t_base_s
      struct are replaced by struct timer_base_s t_base.
      
      It is indeed ugly. But this can't have scalability problems. The global
      __init_timer_base.lock is used only when __mod_timer() is called for the first
      time AND the timer was compile time initialized. After that the timer migrates
      to the local CPU.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] blk: remove BLK_TAGS_{PER_LONG|MASK} · f7d37d02
      Tejun Heo committed
      Replace BLK_TAGS_PER_LONG with BITS_PER_LONG and remove unused BLK_TAGS_MASK.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Acked-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] blk: remove blk_queue_tag->real_max_depth optimization · fa72b903
      Tejun Heo committed
      blk_queue_tag->real_max_depth was used to optimize out unnecessary
      allocations/frees on tag resize.  However, the whole thing was very
      broken: tag_map was never allocated to real_max_depth, resulting in
      access beyond the end of the map, and bits in [max_depth..real_max_depth]
      were set when initializing a map and copied when resizing, resulting in
      pre-occupied tags.
      
      As the gain from the optimization is very small, well, almost nil,
      remove the whole thing.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Acked-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] NUMA aware block device control structure allocation · 1946089a
      Christoph Lameter committed
      Patch to allocate the control structures for ide devices on the node of
      the device itself (for NUMA systems).  The patch depends on the Slab API
      change patch by Manfred and me (in mm) and the pcidev_to_node patch that I
      posted today.
      
      Does some realignment too.
      Signed-off-by: Justin M. Forbes <jmforbes@linuxtx.org>
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Pravin Shelar <pravin@calsoftinc.com>
      Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sparsemem hotplug base · 29751f69
      Andy Whitcroft committed
      Make sparsemem's initialization accessible at runtime.  This allows sparse
      mappings to be created after boot in a hotplug situation.
      
      This patch is separated from the previous one just to give an indication how
      much of the sparse infrastructure is *just* for hotplug memory.
      
      The section_mem_map doesn't really store a pointer.  It stores something that
      is convenient to do some math against to get a pointer.  It isn't valid to
      just do *section_mem_map, so I don't think it should be stored as a pointer.
      
      There are a couple of things I'd like to store about a section.  First of all,
      the fact that it is !NULL does not mean that it is present.  There could be
      such a combination where section_mem_map *is* NULL, but the math gets you
      properly to a real mem_map.  So, I don't think that check is safe.
      
      Since we're storing 32-bit-aligned structures, we have a few bits in the
      bottom of the pointer to play with.  Use one bit to encode whether there's
      really a mem_map there, and the other one to tell whether there's a valid
      section there.  We need to distinguish between the two because sometimes
      there's a gap between when a section is discovered to be present and when we
      can get the mem_map for it.
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sparsemem swiss cheese numa layouts · 641c7673
      Andy Whitcroft committed
      The part of the sparsemem patch which modifies memmap_init_zone() has recently
      become a problem.  It changes behavior so that there is a call to
      pfn_to_page() for each individual page inside of a node's range:
      node_start_pfn through node_end_pfn.  It used to simply do this once, at the
      beginning of the node, but having sparsemem's non-contiguous mem_map[]s inside
      of a node made it necessary to change.
      
      Mike Kravetz recently wrote a patch which made the NUMA code accept some new
      kinds of layouts.  The system's memory was laid out like this, with node 0's
      memory in two pieces: one before and one after node 1's memory:
      
      	Node 0: +++++     +++++
      	Node 1:      +++++
      
      Previous behavior before Mike's patch was to assign nodes like this:
      
      	Node 0: 00000     XXXXX
      	Node 1:      11111
      
      Where the 'X' areas were simply thrown away.  The new behavior was to make
      the pg_data_t span node 0 across all of its areas, including areas that
      are really node 1's:
      
      	Node 0: 000000000000000
      	Node 1:      11111
      
      This wastes a little bit of mem_map space, but ends up being OK, and more
      fully utilizes the system's memory.  memmap_init_zone() initializes all of the
      "struct page"s for node 0, even for the "hole", but those never get used,
      because there is no pfn_to_page() that resolves to those pages.  However,
      since it calls pfn_to_page() only once, memmap_init_zone() always uses the
      pages that were allocated for node0->node_mem_map:
      
      	struct page *start = pfn_to_page(start_pfn);
      	// effectively start = &node->node_mem_map[0]
      	for (page = start; page < (start + size); page++) {
      		init_page_here();...
      	}
      
      Slow, and wasteful, but generally harmless.
      
      But, modify that to call pfn_to_page() for each loop iteration (like sparsemem
      does):
      
      	for (pfn = start_pfn; pfn < (start_pfn + size); pfn++) {
      		page = pfn_to_page(pfn);
      	}
      
      And you end up trying to initialize node 1's pages too early, along with bogus
      data from node 0.  This patch checks for those weird layouts and declines to
      touch the pages, making the more frequent pfn_to_page() calls OK to do.
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sparsemem memory model · d41dee36
      Andy Whitcroft committed
      Sparsemem abstracts the use of discontiguous mem_maps[].  This kind of
      mem_map[] is needed by discontiguous memory machines (like in the old
      CONFIG_DISCONTIGMEM case) as well as memory hotplug systems.  Sparsemem
      replaces DISCONTIGMEM when enabled, and it is hoped that it can eventually
      become a complete replacement.
      
      A significant advantage over DISCONTIGMEM is that it's completely separated
      from CONFIG_NUMA.  When producing this patch, it became apparent that NUMA
      and DISCONTIG are often confused.
      
      Another advantage is that sparse doesn't require each NUMA node's ranges to be
      contiguous.  It can handle overlapping ranges between nodes with no problems,
      where DISCONTIGMEM currently throws away that memory.
      
      Sparsemem uses an array to provide different pfn_to_page() translations for
      each SECTION_SIZE area of physical memory.  This is what allows the mem_map[]
      to be chopped up.
      
      In order to do quick pfn_to_page() operations, the section number of the page
      is encoded in page->flags.  Part of the sparsemem infrastructure enables
      sharing of these bits more dynamically (at compile-time) between the
      page_zone() and sparsemem operations.  However, on 32-bit architectures, the
      number of bits is quite limited, and may require growing the size of the
      page->flags type in certain conditions.  Several things might force this to
      occur: a decrease in the SECTION_SIZE (if you want to hotplug smaller areas of
      memory), an increase in the physical address space, or an increase in the
      number of used page->flags.
      
      One thing to note is that, once sparsemem is present, the NUMA node
      information no longer needs to be stored in the page->flags.  It might provide
      speed increases on certain platforms and will be stored there if there is
      room.  But, if out of room, an alternate (theoretically slower) mechanism is
      used.
      
      This patch introduces CONFIG_FLATMEM.  It is used in almost all cases where
      there used to be an #ifndef DISCONTIG, because SPARSEMEM and DISCONTIGMEM
      often have to compile out the same areas of code.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Martin Bligh <mbligh@aracnet.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] generify early_pfn_to_nid · b159d43f
      Andy Whitcroft committed
      Provide a default implementation for early_pfn_to_nid returning node 0.  Allow
      architectures to override this with their own implementation out of
      asm/mmzone.h.
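
      A hedged sketch of the shape of that default; an architecture overrides
      it simply by defining the macro in its asm/mmzone.h:

        /* generic fallback: all early pages live on node 0 */
        #ifndef early_pfn_to_nid
        #define early_pfn_to_nid(pfn)   (0)
        #endif
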
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Martin Bligh <mbligh@aracnet.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>