1. 07 May 2008, 3 commits
  2. 09 May 2008, 28 commits
  3. 08 May 2008, 9 commits
    • sched: fix weight calculations · 46151122
      Mike Galbraith authored
      The conversion between virtual and real time is as follows:
      
        dvt = rw/w * dt <=> dt = w/rw * dvt
      
      Since we want the fair sleeper granularity to be in real time, we actually
      need to do:
      
        dvt = - rw/w * l
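
      As a rough illustration (a sketch, not the patch itself; assuming w
      is the entity's weight, rw the total runqueue weight, and l the
      real-time fair sleeper granularity), the conversion looks like:

              /* virtual time advances rw/w times as fast as real time */
              static inline u64 real_to_virtual(u64 dt, u64 w, u64 rw)
              {
                      return dt * rw / w;     /* dvt = rw/w * dt */
              }

              static inline u64 virtual_to_real(u64 dvt, u64 w, u64 rw)
              {
                      return dvt * w / rw;    /* dt = w/rw * dvt */
              }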
      
      This bug could be related to the regression reported by Yanmin Zhang:
      
      | Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
      | of regressions with 2.6.26-rc1:
      |
      | 1) 8-core stoakley: 28%;
      | 2) 16-core tigerton: 20%;
      | 3) Itanium Montvale: 50%.
      Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • semaphore: fix · bf726eab
      Ingo Molnar authored
      Yanmin Zhang reported:
      
      | Comparing with kernel 2.6.25, AIM7 (use tmpfs) has more than 40%
      | regression under 2.6.26-rc1 on my 8-core stoakley, 16-core tigerton,
      | and Itanium Montecito. Bisect located the patch below:
      |
      | 64ac24e7 is first bad commit
      | commit 64ac24e7
      | Author: Matthew Wilcox <matthew@wil.cx>
      | Date:   Fri Mar 7 21:55:58 2008 -0500
      |
      |     Generic semaphore implementation
      |
      | After I manually reverted the patch against 2.6.26-rc1 while fixing
      | lots of conflicts/errors, aim7 regression became less than 2%.
      
      i reproduced the AIM7 workload and can confirm Yanmin's findings that
      v2.6.26-rc1 regresses over v2.6.25 - by over 67% here.
      
      Looking at the workload i found and fixed what i believe to be the real
      bug causing the AIM7 regression: it was inefficient wakeup / scheduling
      / locking behavior of the new generic semaphore code, causing suboptimal
      performance.
      
      The problem comes from the following code. The new semaphore code does
      this on down():
      
              spin_lock_irqsave(&sem->lock, flags);
              if (likely(sem->count > 0))
                      sem->count--;
              else
                      __down(sem);
              spin_unlock_irqrestore(&sem->lock, flags);
      
      and this on up():
      
              spin_lock_irqsave(&sem->lock, flags);
              if (likely(list_empty(&sem->wait_list)))
                      sem->count++;
              else
                      __up(sem);
              spin_unlock_irqrestore(&sem->lock, flags);
      
      where __up() does:
      
              list_del(&waiter->list);
              waiter->up = 1;
              wake_up_process(waiter->task);
      
      and where __down() does this in essence:
      
              list_add_tail(&waiter.list, &sem->wait_list);
              waiter.task = task;
              waiter.up = 0;
              for (;;) {
                      [...]
                      spin_unlock_irq(&sem->lock);
                      timeout = schedule_timeout(timeout);
                      spin_lock_irq(&sem->lock);
                      if (waiter.up)
                              return 0;
              }
      
      the fastpath looks good and obvious, but note the following property of
      the contended path: if there's a task on the ->wait_list, the up() of
      the current owner will "pass over" ownership to that waiting task, in a
      wake-one manner, via the waiter->up flag and by removing the waiter from
      the wait list.
      
      That is all fine in principle, but as implemented in
      kernel/semaphore.c it also creates a nasty, hidden source of contention!
      
      The contention comes from the following property of the new semaphore
      code: the new owner owns the semaphore exclusively, even if it is not
      running yet.
      
      So if the old owner, even if just a few instructions later, does a
      down() [lock_kernel()] again, it will be blocked and will have to wait
      on the new owner to eventually be scheduled (possibly on another CPU)!
      Or if another task gets to lock_kernel() sooner than the "new owner"
      is scheduled, it will be blocked unnecessarily and for a very long
      time when there are 2000 tasks running.
      
      I.e. the implementation of the new semaphores code does wake-one and
      lock ownership in a very restrictive way - it does not allow
      opportunistic re-locking of the lock at all and keeps the scheduler from
      picking task order intelligently.
      
      This kind of scheduling, with 2000 AIM7 processes running, creates awful
      cross-scheduling between those 2000 tasks, causes reduced parallelism, a
      throttled runqueue length and a lot of idle time. With increasing number
      of CPUs it causes an exponentially worse behavior in AIM7, as the chance
      for a newly woken new-owner task to actually run anytime soon is less
      and less likely.
      
      Note that it takes just a tiny bit of contention for the 'new-semaphore
      catastrophe' to happen: the wakeup latencies get added to whatever small
      contention there is, and quickly snowball out of control!
      
      I believe Yanmin's findings and numbers support this analysis too.
      
      The best fix for this problem is to use the same scheduling logic that
      the kernel/mutex.c code uses: keep the wake-one behavior (that is OK and
      wanted because we do not want to over-schedule), but also allow
      opportunistic locking of the lock even if a wakee is already "in
      flight".
      
      The patch below implements this new logic. With this patch applied the
      AIM7 regression is largely fixed on my quad testbox:
      
        # v2.6.25 vanilla:
        ..................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    56096.4         91      207.5   789.7   0.4675
        2000    55894.4         94      208.2   792.7   0.4658
      
        # v2.6.26-rc1-166-gc0a18111 vanilla:
        ...................................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    33230.6         83      350.3   784.5   0.2769
        2000    31778.1         86      366.3   783.6   0.2648
      
        # v2.6.26-rc1-166-gc0a18111 + semaphore-speedup:
        ...............................................
        Tasks   Jobs/Min        JTI     Real    CPU     Jobs/sec/task
        2000    55707.1         92      209.0   795.6   0.4642
        2000    55704.4         96      209.0   796.0   0.4642
      
      i.e. a 67% speedup. We are now back to within 1% of the v2.6.25
      performance levels and have zero idle time during the test, as expected.
      
      Btw., interactivity also improved dramatically with the fix - for
      example console-switching became almost instantaneous during this
      workload (which after all is running 2000 tasks at once!); without
      the patch it was stuck for a minute at times.
      
      There's another nice side-effect of this speedup patch, the new generic
      semaphore code got even smaller:
      
         text    data     bss     dec     hex filename
         1241       0       0    1241     4d9 semaphore.o.before
         1207       0       0    1207     4b7 semaphore.o.after
      
      (because the waiter.up complication got removed.)
      
      Longer-term we should look into using the mutex code for the generic
      semaphore code as well - but it's not easy due to legacies and it's
      outside of the scope of v2.6.26 and outside the scope of this patch as
      well.
      Bisected-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Revert "relay: fix splice problem" · 75065ff6
      Jens Axboe authored
      This reverts commit c3270e57.
    • [ALSA] soc at91 minor bug fixes · e3a2efa6
      Patrik Sevallius authored
      Found these two bugs while browsing through the code.  The first one
      is a cut-n-paste bug: instead of disabling the clock when
      request_irq() fails, it enabled it once more.  The second one fixes
      a debug printout: AT91_SSC_IER is write-only, while AT91_SSC_IMR is
      readable (and the printed string actually says imr).
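
      For the first fix, the corrected error path looks roughly like this
      (a sketch with made-up variable names, not the exact driver code):

              ret = request_irq(ssc_p->ssc->pid, at91_pcm_interrupt, 0,
                                "at91-ssc", ssc_p);
              if (ret < 0) {
                      /* undo the earlier clk_enable(); the bug called
                       * clk_enable() here a second time instead */
                      clk_disable(ssc_p->ssc->clk);
                      return ret;
              }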
      
      Frank Mandarino was busy so he asked me to send these to this list.
      
      /Patrik
      Signed-off-by: Patrik Sevallius <patrik.sevallius@enea.com>
      Acked-by: Frank Mandarino <fmandarino@endrelia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
    • [ALSA] soc - at91-pcm - Fix line wrapping · 30a717f7
      Mark Brown authored
      There's more checkpatch stuff to fix in the driver; this just fixes
      the minimum required for the following patch to be clean.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
    • net: Added ASSERT_RTNL() to dev_open() and dev_close(). · e46b66bc
      Ben Hutchings authored
      dev_open() and dev_close() must be called holding the RTNL, since they
      call device functions and netdevice notifiers that are promised the RTNL.
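
      A short usage sketch (not from the patch): callers now take the RTNL
      around these calls, for example:

              rtnl_lock();
              err = dev_open(dev);    /* would trip ASSERT_RTNL() without
                                         the surrounding rtnl_lock() */
              rtnl_unlock();
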
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • can: Fix can_send() handling on dev_queue_xmit() failures · c2ab7ac2
      Oliver Hartkopp authored
      The tx packet counting and the local loopback of CAN frames should
      only happen if the CAN frame has been enqueued to the netdevice tx
      queue successfully.
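
      In outline, the corrected ordering looks like this (a sketch, not
      the exact patch):

              err = dev_queue_xmit(skb);
              if (err)        /* not enqueued - don't count or loop back */
                      return err;

              /* only after a successful enqueue: update the tx statistics
               * and deliver the local loopback of the CAN frame */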
      
      Thanks to Andre Naujoks <nautsch@gmail.com> for reporting this issue.
      Signed-off-by: Oliver Hartkopp <oliver@hartkopp.net>
      Signed-off-by: Urs Thuermann <urs@isnogud.escape.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netns: Fix arbitrary net_device-s corruptions on net_ns stop. · aca51397
      Pavel Emelyanov authored
      When a net namespace is destroyed, some devices (those not killed
      explicitly on ns stop) are moved back to init_net.
      
      The problem is that this net_ns change has one point of failure:
      __dev_alloc_name() may be called if a name collision occurs (and
      this is easy to trigger). This allocator performs a likely-to-fail
      GFP_ATOMIC allocation to find a suitable number. The other possible
      error conditions (the device being ns-local or not registered) are
      always false in this case.
      
      So, when this call fails, the device is unregistered. But this is
      *not* the right thing to do, since after this the device may be
      released (and kfree-ed) improperly. E.g. bridges require more
      actions (sysfs update, timer disarming, etc.), some other devices
      want to remove their private areas from lists, and so on.

      I.e. arbitrary use-after-free cases may occur.
      
      The proposed fix is the following: since the only reason for
      dev_change_net_namespace() to fail is the name generation, we may
      give it a unique fall-back name without %d-s in it - the
      dev<ifindex> one - since ifindexes are still unique.
      
      So make this change, raise the failure-case printk loglevel to 
      EMERG and replace the unregister_netdevice call with BUG().
      
      [ Use snprintf() -DaveM ]
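
      In outline, the fall-back naming looks like this (a sketch, not the
      exact patch):

              char buf[IFNAMSIZ];

              /* ifindexes are unique, so this name cannot collide and
               * the snprintf() cannot fail */
              snprintf(buf, IFNAMSIZ, "dev%d", dev->ifindex);
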
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>