1. 04 May 2006, 1 commit
  2. 29 April 2006, 2 commits
  3. 28 April 2006, 3 commits
    • [PATCH] powerpc: Wire up *at syscalls · 2833c28a
      Andreas Schwab committed
      Wire up *at syscalls.
      
      This patch has been tested on ppc64 (using glibc's testsuite, both 32bit
      and 64bit), and compile-tested for ppc32 (I have currently no ppc32 system
      available, but I expect no problems).
      Signed-off-by: Andreas Schwab <schwab@suse.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Use check_legacy_ioport() on ppc32 too. · 1269277a
      David Woodhouse committed
      Some people report that we die on some Macs when we are expecting to
      catch machine checks after poking at some random I/O address. I'd seen
      it happen on my dual G4 with serial ports until we fixed those to use
      OF, but now other users are reporting it with i8042.
      
      This expands the use of check_legacy_ioport() to avoid that situation
      even on 32-bit kernels.
      Signed-off-by: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Fix pagetable bloat for hugepages · f10a04c0
      David Gibson committed
      At present, ARCH=powerpc kernels can waste considerable space in
      pagetables when making large hugepage mappings.  Hugepage PTEs go in
      PMD pages, but each PMD page maps 256M and so contains only 16
      hugepage PTEs (128 bytes of data), yet takes up a 1024-byte
      allocation.  With CONFIG_PPC_64K_PAGES enabled (64k base page size),
      the situation is worse.  Now hugepage PTEs are at the PTE page level
      (also mapping 256M), so we store 16 hugepage PTEs in a 64k allocation.
      
      The PowerPC MMU already requires that any given 256M region be either
      all hugepage or all normal pages.  Thus, with some care, we can use a
      different allocation for the hugepage PTE tables and only allocate the
      128 bytes necessary.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  4. 26 April 2006, 2 commits
  5. 25 April 2006, 1 commit
  6. 23 April 2006, 1 commit
  7. 21 April 2006, 1 commit
  8. 18 April 2006, 1 commit
    • powerpc: Use correct sequence for putting CPU into nap mode · f39224a8
      Paul Mackerras committed
      We weren't using the recommended sequence for putting the CPU into
      nap mode.  When I changed the idle loop, for some reason 7447A cpus
      started hanging when we put them into nap mode.  Changing to the
      recommended sequence fixes that.
      
      The complexity here is that the recommended sequence is a loop that
      keeps putting the cpu back into nap mode.  Clearly we need some way
      to break out of the loop when an interrupt (external interrupt,
      decrementer, performance monitor) occurs.  Here we use a bit in
      the thread_info struct to indicate that we need this, and the exception
      entry code notices this and arranges for the exception to return
      to the value in the link register, thus breaking out of the loop.
      We use a new `local_flags' field in the thread_info which we can
      alter without needing to use an atomic update sequence.
      
      The PPC970 has the same recommended sequence, so we do the same thing
      there too.
      
      This also fixes a bug in the kernel stack overflow handling code on
      32-bit, since it was causing a value that we needed in a register to
      get trashed.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  9. 11 April 2006, 2 commits
    • [PATCH] splice: add support for sys_tee() · 70524490
      Jens Axboe committed
      Basically an in-kernel implementation of tee, which uses splice and the
      pipe buffers as an intelligent way to pass data around by reference.
      
      Where the user space tee consumes the input and produces a stdout and
      file output, this syscall merely duplicates the data inside a pipe to
      another pipe. No data is copied, the output just grabs a reference to the
      input pipe data.
      Signed-off-by: Jens Axboe <axboe@suse.de>
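      As an illustration of what the new syscall does, here is a minimal
      user-space sketch (assuming a libc that exposes the tee(2) wrapper;
      otherwise syscall(__NR_tee, ...) would be needed).  It duplicates data
      from one pipe into another without consuming or copying it:

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          int a[2], b[2];
          char buf[64];
          ssize_t n, m;

          if (pipe(a) < 0 || pipe(b) < 0)
              return 1;

          /* Put some data into the first pipe's buffers. */
          write(a[1], "hello, tee\n", 11);

          /* Duplicate it into the second pipe by reference; nothing is
           * copied and the data is NOT consumed from pipe a. */
          n = tee(a[0], b[1], 4096, 0);
          if (n < 0) {
              perror("tee");
              return 1;
          }

          m = read(b[0], buf, sizeof(buf));
          printf("pipe b received %zd bytes: %.*s", n, (int)m, buf);

          m = read(a[0], buf, sizeof(buf));   /* original data still there */
          printf("pipe a still holds %zd bytes\n", m);
          return 0;
      }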
    • [PATCH] Configurable NODES_SHIFT · c80d79d7
      Yasunori Goto committed
      Current implementations define NODES_SHIFT in include/asm-xxx/numnodes.h for
      each arch.  Its definition is sometimes configurable.  Indeed, ia64 defines 5
      NODES_SHIFT values in the current git tree.  But it looks a bit messy.
      
      The SGI SN2 (ia64) system requires 1024 nodes, and the number of nodes
      there is already configurable.  A suitable node count may also change in
      the future on other architectures, so this makes the number of nodes
      configurable.
      
      This patch set only defines a default value for each arch that needs
      multiple nodes, except ia64.  It is easy to make those configurable too
      if necessary.
      
      On ia64 the number of nodes can already be configured in the generic
      ia64 and SN2 configs, but NODES_SHIFT is also defined for DIG64 and HP's
      machines.  It is therefore changed so that all platforms are configured
      via CONFIG_NODES_SHIFT, which is simpler.
      
      See also: http://marc.theaimsgroup.com/?l=linux-kernel&m=114358010523896&w=2
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
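      The generic fallback this introduces looks roughly like the following
      (a sketch of the pattern, not necessarily the exact header contents):

      /* include/linux/numa.h-style definition (approximate) */
      #ifdef CONFIG_NODES_SHIFT
      #define NODES_SHIFT     CONFIG_NODES_SHIFT  /* chosen per arch via Kconfig */
      #else
      #define NODES_SHIFT     0                   /* single-node default */
      #endif

      #define MAX_NUMNODES    (1 << NODES_SHIFT)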
  10. 04 April 2006, 1 commit
  11. 01 April 2006, 3 commits
  12. 31 March 2006, 2 commits
    • [NET]: Allow skb headroom to be overridden · 025be81e
      Anton Blanchard committed
      Previously we added NET_IP_ALIGN so an architecture can override the
      padding done to align headers. The next step is to allow the skb
      headroom to be overridden.
      
      We currently always reserve 16 bytes to grow into, meaning all DMAs
      start 16 bytes into a cacheline. On ppc64 we really want DMA writes to
      start on a cacheline boundary, so we increase that headroom to one
      cacheline.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
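      A sketch of the override pattern being described (kernel-internal code,
      names in the style of include/linux/skbuff.h; details approximate): the
      architecture may define NET_SKB_PAD, e.g. to its cacheline size, before
      the generic 16-byte fallback applies, and the allocation helper then
      reserves that much headroom so DMA writes start on a cacheline boundary.

      #ifndef NET_SKB_PAD
      #define NET_SKB_PAD 16          /* old fixed headroom, still the default */
      #endif

      static inline struct sk_buff *__dev_alloc_skb(unsigned int length,
                                                    gfp_t gfp_mask)
      {
          struct sk_buff *skb = alloc_skb(length + NET_SKB_PAD, gfp_mask);

          if (likely(skb))
              skb_reserve(skb, NET_SKB_PAD);  /* data now starts NET_SKB_PAD in */
          return skb;
      }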
    • [PATCH] Introduce sys_splice() system call · 5274f052
      Jens Axboe committed
      This adds support for the sys_splice system call.  Using a pipe as a
      transport, it can connect to files or sockets (the latter as output only).
      
      From the splice.c comments:
      
         "splice": joining two ropes together by interweaving their strands.
      
         This is the "extended pipe" functionality, where a pipe is used as
         an arbitrary in-memory buffer. Think of a pipe as a small kernel
         buffer that you can use to transfer data from one end to the other.
      
         The traditional unix read/write is extended with a "splice()" operation
         that transfers data buffers to or from a pipe buffer.
      
         Named by Larry McVoy, original implementation from Linus, extended by
         Jens to support splicing to files and fixing the initial implementation
         bugs.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
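      A minimal user-space sketch of the call (assuming a libc that exposes
      the splice(2) wrapper; the file names are illustrative): one chunk of a
      file is moved into a pipe and then on to another file without the data
      ever passing through a user-space buffer.

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          int p[2];
          int in  = open("/etc/hostname", O_RDONLY);
          int out = open("hostname.copy", O_WRONLY | O_CREAT | O_TRUNC, 0644);
          ssize_t n;

          if (in < 0 || out < 0 || pipe(p) < 0)
              return 1;

          /* file -> pipe: the pipe grabs references to the cached pages */
          n = splice(in, NULL, p[1], NULL, 4096, SPLICE_F_MOVE);
          if (n <= 0) {
              perror("splice in");
              return 1;
          }

          /* pipe -> file: the same buffers are handed on to the output */
          if (splice(p[0], NULL, out, NULL, n, SPLICE_F_MOVE) < 0) {
              perror("splice out");
              return 1;
          }
          return 0;
      }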
  13. 29 March 2006, 3 commits
  14. 28 March 2006, 12 commits
    • [PATCH] powerpc: Kill _machine and hard-coded platform numbers · e8222502
      Benjamin Herrenschmidt committed
      This removes statically assigned platform numbers and reworks the
      powerpc platform probe code to use a better mechanism.  With this,
      board support files can simply declare a new machine type with a
      macro, and implement a probe() function that uses the flattened
      device-tree to detect if they apply for a given machine.
      
      We now have a machine_is() macro that replaces the comparisons of
      _machine with the various PLATFORM_* constants.  This commit also
      changes various drivers to use the new macro instead of looking at
      _machine.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
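      In rough outline the new style looks like this (a sketch only: the
      board name, fields and probe details here are illustrative, and the
      exact probe signature has varied over time):

      static int __init myboard_probe(void)
      {
          /* inspect the flattened device tree and return non-zero if this
           * board support code matches the machine we booted on */
          return 1;   /* placeholder */
      }

      define_machine(myboard) {
          .name  = "MyBoard",
          .probe = myboard_probe,
          /* .setup_arch, .init_IRQ, ... as needed */
      };

      /* Drivers then test the running platform with the new macro: */
      static void myboard_fixup(void)
      {
          if (machine_is(myboard))    /* was: if (_machine == PLATFORM_MYBOARD) */
              ; /* apply board-specific quirk */
      }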
    • [PATCH] git-powerpc: WARN was a dumb idea · 872345b7
      Andrew Morton committed
      There are at least 14 different implementations of WARN() in the tree already.
      The build fails all over the place.
      
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: make ISA floppies work again · b239cbe9
      Stephen Rothwell committed
      We used to assume that a DMA mapping request with a NULL dev was for
      ISA DMA.  This assumption was broken at some point.  Now we explicitly
      pass the detected ISA PCI device in the floppy setup.
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: hvc_console updates · 45d607ed
      Ryan S. Arnold committed
      These are some updates from both Ryan and Arnd for the hvc_console
      driver:
      
      The main point is to enable the inclusion of a console driver
      for rtas, which is currently needed for the cell platform.
      
      It also shuffles around some data-type declarations and moves some
      functions out of include/asm-ppc64/hvconsole.h and into a new
      drivers/char/hvc_console.h file.
      Signed-off-by: "Ryan S. Arnold" <rsa@us.ibm.com>
      Signed-off-by: Arnd Bergmann <abergman@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Rename and export ppc64_firmware_features · d0160bf0
      Michael Ellerman committed
      We need to export ppc64_firmware_features for modules. Before we do that
      I think we should probably rename it to powerpc_firmware_features.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: export validate_sp for oprofile calltrace · 2f25194d
      Anton Blanchard committed
      Export validate_sp so we can use it in the oprofile calltrace code.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Remove some ifdefs in oprofile_impl.h · 72533db0
      Anton Blanchard committed
      - No one uses op_counter_config.valid, so remove it
      - No need to ifdef around function prototypes.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • ppc: Remove CHRP, POWER3 and POWER4 support from arch/ppc · 0a26b136
      Paul Mackerras committed
      32-bit CHRP machines are now supported only in arch/powerpc, as are
      all 64-bit PowerPC processors.  This means that we don't use
      Open Firmware on any platform in arch/ppc any more.
      
      This makes PReP support a single-platform option like every other
      platform support option in arch/ppc now, thus CONFIG_PPC_MULTIPLATFORM
      is gone from arch/ppc.  CONFIG_PPC_PREP is the option that selects
      PReP support and is generally what has replaced
      CONFIG_PPC_MULTIPLATFORM within arch/ppc.
      
      _machine is all but dead now, being #defined to 0.
      
      Updated Makefiles, comments and Kconfig options generally to reflect
      these changes.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] Notifier chain update: API changes · e041c683
      Alan Stern committed
      The kernel's implementation of notifier chains is unsafe.  There is no
      protection against entries being added to or removed from a chain while the
      chain is in use.  The issues were discussed in this thread:
      
          http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2
      
      We noticed that notifier chains in the kernel fall into two basic usage
      classes:
      
      	"Blocking" chains are always called from a process context
      	and the callout routines are allowed to sleep;
      
      	"Atomic" chains can be called from an atomic context and
      	the callout routines are not allowed to sleep.
      
      We decided to codify this distinction and make it part of the API.  Therefore
      this set of patches introduces three new, parallel APIs: one for blocking
      notifiers, one for atomic notifiers, and one for "raw" notifiers (which is
      really just the old API under a new name).  New kinds of data structures are
      used for the heads of the chains, and new routines are defined for
      registration, unregistration, and calling a chain.  The three APIs are
      explained in include/linux/notifier.h and their implementation is in
      kernel/sys.c.
      
      With atomic and blocking chains, the implementation guarantees that the chain
      links will not be corrupted and that chain callers will not get messed up by
      entries being added or removed.  For raw chains the implementation provides no
      guarantees at all; users of this API must provide their own protections.  (The
      idea was that situations may come up where the assumptions of the atomic and
      blocking APIs are not appropriate, so it should be possible for users to
      handle these things in their own way.)
      
      There are some limitations, which should not be too hard to live with.  For
      atomic/blocking chains, registration and unregistration must always be done in
      a process context since the chain is protected by a mutex/rwsem.  Also, a
      callout routine for a non-raw chain must not try to register or unregister
      entries on its own chain.  (This did happen in a couple of places and the code
      had to be changed to avoid it.)
      
      Since atomic chains may be called from within an NMI handler, they cannot use
      spinlocks for synchronization.  Instead we use RCU.  The overhead falls almost
      entirely in the unregister routine, which is okay since unregistration is much
      less frequent than calling a chain.
      
      Here is the list of chains that we adjusted and their classifications.  None
      of them use the raw API, so for the moment it is only a placeholder.
      
        ATOMIC CHAINS
        -------------
      arch/i386/kernel/traps.c:		i386die_chain
      arch/ia64/kernel/traps.c:		ia64die_chain
      arch/powerpc/kernel/traps.c:		powerpc_die_chain
      arch/sparc64/kernel/traps.c:		sparc64die_chain
      arch/x86_64/kernel/traps.c:		die_chain
      drivers/char/ipmi/ipmi_si_intf.c:	xaction_notifier_list
      kernel/panic.c:				panic_notifier_list
      kernel/profile.c:			task_free_notifier
      net/bluetooth/hci_core.c:		hci_notifier
      net/ipv4/netfilter/ip_conntrack_core.c:	ip_conntrack_chain
      net/ipv4/netfilter/ip_conntrack_core.c:	ip_conntrack_expect_chain
      net/ipv6/addrconf.c:			inet6addr_chain
      net/netfilter/nf_conntrack_core.c:	nf_conntrack_chain
      net/netfilter/nf_conntrack_core.c:	nf_conntrack_expect_chain
      net/netlink/af_netlink.c:		netlink_chain
      
        BLOCKING CHAINS
        ---------------
      arch/powerpc/platforms/pseries/reconfig.c:	pSeries_reconfig_chain
      arch/s390/kernel/process.c:		idle_chain
      arch/x86_64/kernel/process.c		idle_notifier
      drivers/base/memory.c:			memory_chain
      drivers/cpufreq/cpufreq.c		cpufreq_policy_notifier_list
      drivers/cpufreq/cpufreq.c		cpufreq_transition_notifier_list
      drivers/macintosh/adb.c:		adb_client_list
      drivers/macintosh/via-pmu.c		sleep_notifier_list
      drivers/macintosh/via-pmu68k.c		sleep_notifier_list
      drivers/macintosh/windfarm_core.c	wf_client_list
      drivers/usb/core/notify.c		usb_notifier_list
      drivers/video/fbmem.c			fb_notifier_list
      kernel/cpu.c				cpu_chain
      kernel/module.c				module_notify_list
      kernel/profile.c			munmap_notifier
      kernel/profile.c			task_exit_notifier
      kernel/sys.c				reboot_notifier_list
      net/core/dev.c				netdev_chain
      net/decnet/dn_dev.c:			dnaddr_chain
      net/ipv4/devinet.c:			inetaddr_chain
      
      It's possible that some of these classifications are wrong.  If they are,
      please let us know or submit a patch to fix them.  Note that any chain that
      gets called very frequently should be atomic, because the rwsem read-locking
      used for blocking chains is very likely to incur cache misses on SMP systems.
      (However, if the chain's callout routines may sleep then the chain cannot be
      atomic.)
      
      The patch set was written by Alan Stern and Chandra Seetharaman, incorporating
      material written by Keith Owens and suggestions from Paul McKenney and Andrew
      Morton.
      
      [jes@sgi.com: restructure the notifier chain initialization macros]
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Jes Sorensen <jes@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
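      A small sketch of the blocking variant of the new API (the atomic one is
      analogous, using ATOMIC_NOTIFIER_HEAD and the atomic_notifier_* calls);
      the chain name, handler and event value below are made up for
      illustration:

      #include <linux/notifier.h>

      static BLOCKING_NOTIFIER_HEAD(my_chain);

      static int my_listener(struct notifier_block *nb, unsigned long event,
                             void *data)
      {
          /* blocking chains are called from process context, so this
           * callout is allowed to sleep */
          return NOTIFY_OK;
      }

      static struct notifier_block my_nb = {
          .notifier_call = my_listener,
      };

      static int __init my_init(void)
      {
          /* registration must happen in process context (rwsem-protected) */
          return blocking_notifier_chain_register(&my_chain, &my_nb);
      }

      static void my_fire_event(void)
      {
          blocking_notifier_call_chain(&my_chain, 1 /* MY_EVENT */, NULL);
      }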
    • [PATCH] lightweight robust futexes updates · 8f17d3a5
      Ingo Molnar committed
      - fix: initialize the robust list(s) to NULL in copy_process.
      
      - doc update
      
      - cleanup: rename _inuser to _inatomic
      
      - __user cleanups and other small cleanups
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] lightweight robust futexes: arch defaults · e9056f13
      Ingo Molnar committed
      This patchset provides a new (written from scratch) implementation of robust
      futexes, called "lightweight robust futexes".  We believe this new
      implementation is faster and simpler than the vma-based robust futex solutions
      presented before, and we'd like this patchset to be adopted in the upstream
      kernel.  This is version 1 of the patchset.
      
        Background
        ----------
      
      What are robust futexes?  To answer that, we first need to understand what
      futexes are: normal futexes are special types of locks that in the
      noncontended case can be acquired/released from userspace without having to
      enter the kernel.
      
      A futex is in essence a user-space address, e.g.  a 32-bit lock variable
      field.  If userspace notices contention (the lock is already owned and someone
      else wants to grab it too) then the lock is marked with a value that says
      "there's a waiter pending", and the sys_futex(FUTEX_WAIT) syscall is used to
      wait for the other guy to release it.  The kernel creates a 'futex queue'
      internally, so that it can later on match up the waiter with the waker -
      without them having to know about each other.  When the owner thread releases
      the futex, it notices (via the variable value) that there were waiter(s)
      pending, and does the sys_futex(FUTEX_WAKE) syscall to wake them up.  Once all
      waiters have taken and released the lock, the futex is again back to
      'uncontended' state, and there's no in-kernel state associated with it.  The
      kernel completely forgets that there ever was a futex at that address.  This
      method makes futexes very lightweight and scalable.
      
      "Robustness" is about dealing with crashes while holding a lock: if a process
      exits prematurely while holding a pthread_mutex_t lock that is also shared
      with some other process (e.g.  yum segfaults while holding a pthread_mutex_t,
      or yum is kill -9-ed), then waiters for that lock need to be notified that the
      last owner of the lock exited in some irregular way.
      
      To solve such types of problems, "robust mutex" userspace APIs were created:
      pthread_mutex_lock() returns an error value if the owner exits prematurely -
      and the new owner can decide whether the data protected by the lock can be
      recovered safely.
      
      There is a big conceptual problem with futex based mutexes though: it is the
      kernel that destroys the owner task (e.g.  due to a SEGFAULT), but the kernel
      cannot help with the cleanup: if there is no 'futex queue' (and in most cases
      there is none, futexes being fast lightweight locks) then the kernel has no
      information to clean up after the held lock!  Userspace has no chance to clean
      up after the lock either - userspace is the one that crashes, so it has no
      opportunity to clean up.  Catch-22.
      
      In practice, when e.g.  yum is kill -9-ed (or segfaults), a system reboot is
      needed to release that futex based lock.  This is one of the leading
      bugreports against yum.
      
      To solve this problem, 'Robust Futex' patches were created and presented on
      lkml: the one written by Todd Kneisel and David Singleton is the most advanced
      at the moment.  These patches all tried to extend the futex abstraction by
      registering futex-based locks in the kernel - and thus give the kernel a
      chance to clean up.
      
      E.g.  in David Singleton's robust-futex-6.patch, there are 3 new syscall
      variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and FUTEX_RECOVER.
      The kernel attaches such robust futexes to vmas (via
      vma->vm_file->f_mapping->robust_head), and at do_exit() time, all vmas are
      searched to see whether they have a robust_head set.
      
      Lots of work went into the vma-based robust-futex patch, and recently it has
      improved significantly, but unfortunately it still has two fundamental
      problems left:
      
       - they have quite complex locking and race scenarios.  The vma-based
         patches had been pending for years, but they are still not completely
         reliable.
      
       - they have to scan _every_ vma at sys_exit() time, per thread!
      
      The second disadvantage is a real killer: pthread_exit() takes around 1
      microsecond on Linux, but with thousands (or tens of thousands) of vmas every
      pthread_exit() takes a millisecond or more, also totally destroying the CPU's
      L1 and L2 caches!
      
      This is very much noticeable even for normal process sys_exit_group() calls:
      the kernel has to do the vma scanning unconditionally!  (this is because the
      kernel has no knowledge about how many robust futexes there are to be cleaned
      up, because a robust futex might have been registered in another task, and the
      futex variable might have been simply mmap()-ed into this process's address
      space).
      
      This huge overhead forced the creation of CONFIG_FUTEX_ROBUST, but worse than
      that: the overhead makes robust futexes impractical for any type of generic
      Linux distribution.
      
      So it became clear to us, something had to be done.  Last week, when Thomas
      Gleixner tried to fix up the vma-based robust futex patch in the -rt tree, he
      found a handful of new races and we were talking about it and were analyzing
      the situation.  At that point a fundamentally different solution occurred to
      me.  This patchset (written in the past couple of days) implements that new
      solution.  Be warned though - the patchset does things we normally don't do in
      Linux, so some might find the approach disturbing.  Parental advice
      recommended ;-)
      
        New approach to robust futexes
        ------------------------------
      
      At the heart of this new approach there is a per-thread private list of robust
      locks that userspace is holding (maintained by glibc) - which userspace list
      is registered with the kernel via a new syscall [this registration happens at
      most once per thread lifetime].  At do_exit() time, the kernel checks this
      user-space list: are there any robust futex locks to be cleaned up?
      
      In the common case, at do_exit() time, there is no list registered, so the
      cost of robust futexes is just a simple current->robust_list != NULL
      comparison.  If the thread has registered a list, then normally the list is
      empty.  If the thread/process crashed or terminated in some incorrect way then
      the list might be non-empty: in this case the kernel carefully walks the list
      [not trusting it], and marks all locks that are owned by this thread with the
      FUTEX_OWNER_DIED bit, and wakes up one waiter (if any).
      
      The list is guaranteed to be private and per-thread, so it's lockless.  There
      is one race possible though: since adding to and removing from the list is
      done after the futex is acquired by glibc, there is a window of a few
      instructions in which the thread (or process) may die, leaving the futex
      hung.  To protect
      against this possibility, userspace (glibc) also maintains a simple per-thread
      'list_op_pending' field, to allow the kernel to clean up if the thread dies
      after acquiring the lock, but just before it could have added itself to the
      list.  Glibc sets this list_op_pending field before it tries to acquire the
      futex, and clears it after the list-add (or list-remove) has finished.
      
      That's all that is needed - all the rest of robust-futex cleanup is done in
      userspace [just like with the previous patches].
      
      Ulrich Drepper has implemented the necessary glibc support for this new
      mechanism, which fully enables robust mutexes.  (Ulrich plans to commit these
      changes to glibc-HEAD later today.)
      
      Key differences of this userspace-list based approach, compared to the vma
      based method:
      
       - it's much, much faster: at thread exit time, there's no need to loop
         over every vma (!), which the VM-based method has to do.  Only a very
         simple 'is the list empty' op is done.
      
       - no VM changes are needed - 'struct address_space' is left alone.
      
       - no registration of individual locks is needed: robust mutexes don't need
         any extra per-lock syscalls.  Robust mutexes thus become a very lightweight
         primitive - so they don't force the application designer to make a hard
         choice between performance and robustness - robust mutexes are just as fast.
      
       - no per-lock kernel allocation happens.
      
       - no resource limits are needed.
      
       - no kernel-space recovery call (FUTEX_RECOVER) is needed.
      
       - the implementation and the locking is "obvious", and there are no
         interactions with the VM.
      
        Performance
        -----------
      
      I have benchmarked the time needed for the kernel to process a list of 1
      million (!) held locks, using the new method [on a 2GHz CPU]:
      
       - with FUTEX_WAIT set [contended mutex]: 130 msecs
       - without FUTEX_WAIT set [uncontended mutex]: 30 msecs
      
      I have also measured an approach where glibc does the lock notification [which
      it currently does for !pshared robust mutexes], and that took 256 msecs -
      clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had to do.
      
      (1 million held locks are unheard of - we expect at most a handful of locks to
      be held at a time.  Nevertheless it's nice to know that this approach scales
      nicely.)
      
        Implementation details
        ----------------------
      
      The patch adds two new syscalls: one to register the userspace list, and one
      to query the registered list pointer:
      
       asmlinkage long
       sys_set_robust_list(struct robust_list_head __user *head,
                           size_t len);
      
       asmlinkage long
       sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                           size_t __user *len_ptr);
      
      List registration is very fast: the pointer is simply stored in
      current->robust_list.  [Note that in the future, if robust futexes become
      widespread, we could extend sys_clone() to register a robust-list head for new
      threads, without the need of another syscall.]
      
      So there is virtually zero overhead for tasks not using robust futexes, and
      even for robust futex users, there is only one extra syscall per thread
      lifetime, and the cleanup operation, if it happens, is fast and
      straightforward.  The kernel doesn't have any internal distinction between
      robust and normal futexes.
      
      If a futex is found to be held at exit time, the kernel sets the highest bit
      of the futex word:
      
      	#define FUTEX_OWNER_DIED        0x40000000
      
      and wakes up the next futex waiter (if any). User-space does the rest of
      the cleanup.
      
      Otherwise, robust futexes are acquired by glibc by putting the TID into the
      futex field atomically.  Waiters set the FUTEX_WAITERS bit:
      
      	#define FUTEX_WAITERS           0x80000000
      
      and the remaining bits are for the TID.
      
        Testing, architecture support
        -----------------------------
      
      I've tested the new syscalls on x86 and x86_64, and have made sure the parsing
      of the userspace list is robust [ ;-) ] even if the list is deliberately
      corrupted.
      
      i386 and x86_64 syscalls are wired up at the moment, and Ulrich has tested the
      new glibc code (on x86_64 and i386), and it works for his robust-mutex
      testcases.
      
      All other architectures should build just fine too - but they won't have the
      new syscalls yet.
      
      Architectures need to implement the new futex_atomic_cmpxchg_inuser() inline
      function before wiring up the syscalls (that function returns -ENOSYS right
      now).
      
      This patch:
      
      Add placeholder futex_atomic_cmpxchg_inuser() implementations to every
      architecture that supports futexes.  It returns -ENOSYS.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Acked-by: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
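      For illustration, registering an (empty) robust list from user space
      looks roughly like this; normally glibc does the registration itself,
      and the SYS_set_robust_list macro is only present once the syscall is
      wired up on the architecture:

      #include <linux/futex.h>        /* struct robust_list_head */
      #include <stdio.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(void)
      {
          static struct robust_list_head head;

          /* an empty list points at itself; futex_offset says where the
           * futex word lives relative to each list entry (0 here, purely
           * illustrative) */
          head.list.next = &head.list;
          head.futex_offset = 0;
          head.list_op_pending = NULL;

      #ifdef SYS_set_robust_list
          if (syscall(SYS_set_robust_list, &head, sizeof(head)) != 0)
              perror("set_robust_list");
          else
              printf("robust list registered for this thread\n");
      #endif
          return 0;
      }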
    • [PATCH] unify pfn_to_page: powerpc pfn_to_page · 659e3505
      KAMEZAWA Hiroyuki committed
      PowerPC can use generic ones.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 27 March 2006, 5 commits
    • powerpc: Fix event-scan code for 32-bit CHRP · 9618edab
      Paul Mackerras committed
      On CHRP machines we are supposed to call into firmware (RTAS)
      periodically, to give it a chance to check for errors and other
      events.  Under ppc we had some special code in timer_interrupt
      to do this, but that didn't get transferred over to arch/powerpc.
      Instead, we use an array of timer_list structs, one per CPU,
      and use add_timer_on to make sure each one gets called on the
      appropriate CPU.
      
      With this we can remove the heartbeat_* elements of the ppc_md
      struct.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
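      The mechanism, sketched with the timer API of that era (the function,
      interval and array names here are illustrative, not the literal code):

      static struct timer_list event_scan_timer[NR_CPUS];
      static unsigned long event_scan_interval;   /* in jiffies */

      static void event_scan_fn(unsigned long cpu)
      {
          /* ... call into RTAS so firmware can check for errors/events ... */

          /* re-arm; since this runs on the target CPU it stays there */
          mod_timer(&event_scan_timer[cpu], jiffies + event_scan_interval);
      }

      static void __init start_event_scan_timers(void)
      {
          int cpu;

          for_each_online_cpu(cpu) {
              struct timer_list *t = &event_scan_timer[cpu];

              init_timer(t);
              t->function = event_scan_fn;
              t->data     = cpu;
              t->expires  = jiffies + event_scan_interval;
              add_timer_on(t, cpu);   /* make it fire on that CPU */
          }
      }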
    • powerpc: Simplify pSeries idle loop · fbd7740f
      Paul Mackerras committed
      Since pSeries only wants to do something different in the idle loop when
      there is no work to do, we can simplify the code by implementing
      ppc_md.power_save functions instead of complete idle loops.  There are
      two versions: one for shared-processor partitions and one for dedicated-
      processor partitions.
      
      With this we also do a cede_processor() call on dedicated processor
      partitions if the poll_pending() call indicates that the hypervisor
      has work it wants to do.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Unify the 32 and 64 bit idle loops · a0652fc9
      Paul Mackerras committed
      This unifies the 32-bit (ARCH=ppc and ARCH=powerpc) and 64-bit idle
      loops.  It brings over the concept of having a ppc_md.power_save
      function from 32-bit to ARCH=powerpc, which lets us get rid of
      native_idle().  With this we will also be able to simplify the idle
      handling for pSeries and cell.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
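      Conceptually the unified loop looks something like the sketch below
      (heavily simplified, not the literal arch/powerpc/kernel/idle.c):
      platform code fills in ppc_md.power_save, and the common idle loop
      calls it whenever there is nothing to run.

      void cpu_idle(void)
      {
          for (;;) {
              while (!need_resched()) {
                  if (ppc_md.power_save)
                      ppc_md.power_save();    /* e.g. nap, or cede to the
                                                 hypervisor on pSeries */
                  else
                      cpu_relax();            /* plain busy wait */
              }
              schedule();
          }
      }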
    • [PATCH] powerpc: Allow non zero boot cpuids · 4df20460
      Anton Blanchard committed
      We currently have a hack to flip the boot cpu and its secondary thread
      to logical cpuid 0 and 1.  This means the logical-to-physical mapping will
      differ depending on which cpu is boot cpu. This is most apparent on
      kexec, where we might kexec on any cpu and therefore change the mapping
      from boot to boot.
      
      The patch below does a first pass early on to work out the logical cpuid
      of the boot thread. We then fix up some paca structures to match.
      
      I've also removed the boot_cpuid_phys variable for ppc64; to be
      consistent we use get_hard_smp_processor_id(boot_cpuid) everywhere.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] spufs: enable SPE problem state MMIO access. · 6df10a82
      Mark Nutter committed
      This patch is layered on top of CONFIG_SPARSEMEM
      and is patterned after direct mapping of LS.
      
      This patch allows mmap() of the following regions:
      "mfc", which represents the area from [0x3000 - 0x3fff];
      "cntl", which represents the area from [0x4000 - 0x4fff];
      "signal1" which begins at offset 0x14000; "signal2" which
      begins at offset 0x1c000.
      
      The signal1 & signal2 files may be mmap()'d by regular user
      processes.  The cntl and mfc files, on the other hand, may
      only be accessed if the owning process has CAP_SYS_RAWIO,
      because they have the potential to confuse the kernel
      with regard to parallel access to the same files with
      regular file operations: the kernel always holds a spinlock
      when accessing registers in these areas to serialize them,
      which cannot be guaranteed with user mmaps.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>