1. 28 January 2008, 2 commits
2. 26 January 2008, 19 commits
• sched: keep total / count stats in addition to the max for · 6d082592
  Arjan van de Ven committed
Right now, the Linux kernel (with scheduler statistics enabled) keeps track
      of the maximum time a process is waiting to be scheduled. While the maximum
      is a very useful metric, tracking average and total is equally useful
      (at least for latencytop) to figure out the accumulated effect of scheduler
      delays. The accumulated effect is important to judge the performance impact
      of scheduler tuning/behavior.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6d082592
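A minimal userspace sketch of the max / total / count bookkeeping described above; the struct and field names here are illustrative stand-ins, not necessarily the kernel's actual statistics fields:

    #include <stdio.h>

    struct wait_stats {
            unsigned long long wait_max;   /* worst-case scheduling delay */
            unsigned long long wait_sum;   /* accumulated delay (total)   */
            unsigned long long wait_count; /* number of waits observed    */
    };

    static void account_wait(struct wait_stats *ws, unsigned long long delta)
    {
            if (delta > ws->wait_max)
                    ws->wait_max = delta;
            ws->wait_sum += delta;  /* total and count together give the */
            ws->wait_count++;       /* average: wait_sum / wait_count    */
    }

    int main(void)
    {
            struct wait_stats ws = { 0, 0, 0 };
            unsigned long long samples[] = { 120, 40, 300 };

            for (int i = 0; i < 3; i++)
                    account_wait(&ws, samples[i]);
            printf("max=%llu avg=%llu\n",
                   ws.wait_max, ws.wait_sum / ws.wait_count);
            return 0;
    }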
• sched, futex: detach sched.h and futex.h · 286100a6
  Alexey Dobriyan committed
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      286100a6
• softlockup: fix signedness · 90739081
  Ingo Molnar committed
Fix the signedness of the softlockup tunables.

Mark the tunables read-mostly.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90739081
• sched: latencytop support · 9745512c
  Arjan van de Ven committed
LatencyTOP kernel infrastructure; it measures latencies in the
scheduler and tracks them system-wide and per process.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9745512c
• sched: rt throttling vs no_hz · 48d5e258
  Peter Zijlstra committed
We need to teach no_hz about rt throttling, because it is tick-driven.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48d5e258
• sched: rt group scheduling · 6f505b16
  Peter Zijlstra committed
      Extend group scheduling to also cover the realtime classes. It uses the time
      limiting introduced by the previous patch to allow multiple realtime groups.
      
      The hard time limit is required to keep behaviour deterministic.
      
The algorithms used make the realtime scheduler O(tg): linear scaling with
respect to the number of task groups. This is the worst-case behaviour I
can't seem to get out of; the average case of the algorithms can be
improved, but I focused on correctness and the worst case.
      
      [ akpm@linux-foundation.org: move side-effects out of BUG_ON(). ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6f505b16
• sched: rt time limit · fa85ae24
  Peter Zijlstra committed
Add a very simple time limit on the realtime scheduling classes.
Allow the rq's realtime class to consume sched_rt_ratio of every
sched_rt_period slice. If the class exceeds this quota, the fair class
will preempt the realtime class.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa85ae24
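A hedged sketch of the per-period quota check described above; rt_time, rt_throttled and the helper names are illustrative, not the actual kernel code:

    /* Sketch of rt throttling: the rt class may consume at most
       rt_quota (= ratio * period) of each sched_rt_period slice. */
    struct rt_rq_sketch {
            unsigned long long rt_time;  /* rt runtime used this period */
            int rt_throttled;
    };

    /* Called from the periodic tick with the runtime used since the
       previous tick. */
    static void sched_rt_tick(struct rt_rq_sketch *rt_rq,
                              unsigned long long delta,
                              unsigned long long rt_quota)
    {
            rt_rq->rt_time += delta;
            if (rt_rq->rt_time > rt_quota)
                    rt_rq->rt_throttled = 1; /* fair class may now preempt */
    }

    /* At the start of every sched_rt_period the budget is replenished. */
    static void sched_rt_period_start(struct rt_rq_sketch *rt_rq)
    {
            rt_rq->rt_time = 0;
            rt_rq->rt_throttled = 0;
    }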
• sched: high-res preemption tick · 8f4d37ec
  Peter Zijlstra committed
      Use HR-timers (when available) to deliver an accurate preemption tick.
      
The regular scheduler tick that runs at 1/HZ can be too coarse when nice
levels are used. The fairness system will still keep the cpu utilisation
'fair' by delaying a task that got an excessive amount of CPU time, but
tries to minimize this by delivering preemption points spot-on.

The average frequency of this extra interrupt is sched_latency /
nr_latency, which need not be higher than 1/HZ; it's just that the
distribution within the sched_latency period is important.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8f4d37ec
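A quick back-of-envelope check of the frequency claim above, using illustrative values for sched_latency and nr_latency (not kernel defaults):

    #include <stdio.h>

    int main(void)
    {
            /* Illustrative values, not kernel defaults. */
            double sched_latency_ms = 20.0;
            double nr_latency = 5.0;

            /* Average interval of the extra HR-timer preemption point. */
            printf("avg extra-tick interval: %.1f ms\n",
                   sched_latency_ms / nr_latency);
            /* With HZ=250 the regular tick fires every 4.0 ms, so the
               average extra load is comparable; only the placement of
               the preemption point within sched_latency improves. */
            return 0;
    }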
• sched: do not do cond_resched() when CONFIG_PREEMPT · 02b67cc3
  Herbert Xu committed
Why do we even have cond_resched() when real preemption
is on? It seems to be a waste of space and time.

Remove cond_resched() when CONFIG_PREEMPT is on.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      02b67cc3
• sched: SCHED_FIFO/SCHED_RR watchdog timer · 78f2c7db
  Peter Zijlstra committed
Introduce a new rlimit that allows the user to set a runtime timeout on a
real-time task's slice. Once this limit is exceeded, the task will receive
SIGXCPU.

The limit measures runtime since the last sleep.
      
      Input and ideas by Thomas Gleixner and Lennart Poettering.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Lennart Poettering <mzxreary@0pointer.de>
      CC: Michael Kerrisk <mtk.manpages@googlemail.com>
      CC: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78f2c7db
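The rlimit introduced here became RLIMIT_RTTIME; a minimal userspace sketch of how it is used (limits are in microseconds per getrlimit(2); the numbers below are arbitrary):

    #include <sys/resource.h>
    #include <signal.h>
    #include <stdio.h>

    static void on_sigxcpu(int sig)
    {
            (void)sig;  /* runtime budget exceeded; back off or sleep */
    }

    int main(void)
    {
            struct rlimit rl = {
                    .rlim_cur = 200000, /* soft: SIGXCPU after 200 ms  */
                    .rlim_max = 500000, /* hard: SIGKILL after 500 ms  */
            };

            signal(SIGXCPU, on_sigxcpu);
            if (setrlimit(RLIMIT_RTTIME, &rl))
                    perror("setrlimit");
            /* ... real-time work; the budget resets on every sleep ... */
            return 0;
    }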
• sched: sched_rt_entity · fa717060
  Peter Zijlstra committed
      Move the task_struct members specific to rt scheduling together.
      A future optimization could be to put sched_entity and sched_rt_entity
      into a union.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa717060
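A simplified sketch of the grouping this describes; the members shown are illustrative stubs, not the full kernel definitions:

    struct list_head { struct list_head *next, *prev; };
    struct sched_entity { int stub; };   /* stand-in for the CFS entity */

    struct sched_rt_entity_sketch {
            struct list_head run_list;   /* position on the rt runqueue  */
            unsigned long timeout;       /* RLIMIT_RTTIME watchdog state */
    };

    struct task_struct_sketch {
            /* ... */
            struct sched_entity se;            /* CFS bookkeeping        */
            struct sched_rt_entity_sketch rt;  /* rt bookkeeping; could
                                                  later share a union
                                                  with 'se' above        */
            /* ... */
    };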
• Preempt-RCU: implementation · e260be67
  Paul E. McKenney committed
      This patch implements a new version of RCU which allows its read-side
critical sections to be preempted. It uses a set of counter pairs
to keep track of the read-side critical sections and flips them
when all tasks have exited their read-side critical sections. The details
of this implementation can be found in this paper:
      
      	http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf
      
and in this article:
      
      	http://lwn.net/Articles/253651/
      
This patch was developed as part of the -rt kernel development and is
meant to provide better latencies when read-side critical sections of
      RCU don't disable preemption.  As a consequence of keeping track of RCU
      readers, the readers have a slight overhead (optimizations in the paper).
This implementation co-exists with the "classic" RCU implementations
and can be selected at compile time.
      
      Also includes RCU tracing summarized in debugfs.
      
      [ akpm@linux-foundation.org: build fixes on non-preempt architectures ]
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e260be67
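A heavily simplified, single-counter-pair sketch of the flip scheme described above; the real implementation is per-cpu and far subtler, and all names here are illustrative:

    #include <stdatomic.h>

    static atomic_int rcu_flipctr[2];   /* the counter pair        */
    static atomic_int rcu_ctrlblk_idx;  /* selects the "new" side  */

    static int sketch_rcu_read_lock(void)
    {
            int idx = atomic_load(&rcu_ctrlblk_idx) & 1;
            atomic_fetch_add(&rcu_flipctr[idx], 1); /* reader may now be
                                                       preempted safely  */
            return idx;  /* remembered per-task in the real code */
    }

    static void sketch_rcu_read_unlock(int idx)
    {
            atomic_fetch_sub(&rcu_flipctr[idx], 1);
    }

    /* Grace period: flip to the other counter, then wait for readers
       still accounted on the old counter to drain.  The real code also
       handles the reader/flip races this sketch ignores. */
    static void sketch_synchronize_rcu(void)
    {
            int old = atomic_fetch_add(&rcu_ctrlblk_idx, 1) & 1;

            while (atomic_load(&rcu_flipctr[old]) != 0)
                    ; /* spin here; the kernel waits politely instead */
    }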
• sched: RT-balance, add new methods to sched_class · cb469845
  Steven Rostedt committed
      Dmitry Adamushko found that the current implementation of the RT
      balancing code left out changes to the sched_setscheduler and
      rt_mutex_setprio.
      
This patch addresses this issue by adding methods to the scheduler classes
to handle being switched out of (switched_from) and being switched into
(switched_to) a sched_class. A method for priority changes (prio_changed)
is also added.
      
      This patch also removes some duplicate logic between rt_mutex_setprio and
      sched_setscheduler.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cb469845
• sched: RT-balance, replace hooks with pre/post schedule and wakeup methods · 9a897c5a
  Steven Rostedt committed
Make the main sched.c code more agnostic to the scheduler classes.
Instead of having specific hooks in the schedule code for RT class
balancing, they are replaced with pre_schedule, post_schedule
and task_wake_up methods. These methods may be used by any of the classes,
but currently only the sched_rt class implements them.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9a897c5a
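A sketch of the class-method surface the two commits above add; the signatures are simplified stand-ins for the real sched_class members:

    struct rq;           /* runqueue, opaque in this sketch */
    struct task_struct;  /* opaque in this sketch           */

    struct sched_class_sketch {
            /* from "add new methods to sched_class" (cb469845): */
            void (*switched_from)(struct rq *rq, struct task_struct *p);
            void (*switched_to)(struct rq *rq, struct task_struct *p);
            void (*prio_changed)(struct rq *rq, struct task_struct *p,
                                 int oldprio);

            /* from "replace hooks with pre/post schedule and wakeup
               methods" (9a897c5a); only sched_rt fills these in: */
            void (*pre_schedule)(struct rq *rq, struct task_struct *prev);
            void (*post_schedule)(struct rq *rq);
            void (*task_wake_up)(struct rq *rq, struct task_struct *p);
    };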
• sched: add sched-domain roots · 57d885fe
  Gregory Haskins committed
      We add the notion of a root-domain which will be used later to rescope
      global variables to per-domain variables.  Each exclusive cpuset
      essentially defines an island domain by fully partitioning the member cpus
      from any other cpuset.  However, we currently still maintain some
      policy/state as global variables which transcend all cpusets.  Consider,
      for instance, rt-overload state.
      
      Whenever a new exclusive cpuset is created, we also create a new
      root-domain object and move each cpu member to the root-domain's span.
      By default the system creates a single root-domain with all cpus as
      members (mimicking the global state we have today).
      
      We add some plumbing for storing class specific data in our root-domain.
      Whenever a RQ is switching root-domains (because of repartitioning) we
      give each sched_class the opportunity to remove any state from its old
      domain and add state to the new one.  This logic doesn't have any clients
      yet but it will later in the series.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      CC: Paul Jackson <pj@sgi.com>
      CC: Simon Derr <simon.derr@bull.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      57d885fe
• sched: de-SCHED_OTHER-ize the RT path · e7693a36
  Gregory Haskins committed
      The current wake-up code path tries to determine if it can optimize the
      wake-up to "this_cpu" by computing load calculations.  The problem is that
      these calculations are only relevant to SCHED_OTHER tasks where load is king.
      For RT tasks, priority is king.  So the load calculation is completely wasted
      bandwidth.
      
Therefore, we create a new sched_class interface to help with
pre-wakeup routing decisions and move the load calculation into the
CFS task's class.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e7693a36
• sched: add RT-balance cpu-weight · 73fe6aae
  Gregory Haskins committed
      Some RT tasks (particularly kthreads) are bound to one specific CPU.
      It is fairly common for two or more bound tasks to get queued up at the
      same time.  Consider, for instance, softirq_timer and softirq_sched.  A
      timer goes off in an ISR which schedules softirq_thread to run at RT50.
      Then the timer handler determines that it's time to smp-rebalance the
      system so it schedules softirq_sched to run.  So we are in a situation
      where we have two RT50 tasks queued, and the system will go into
      rt-overload condition to request other CPUs for help.
      
      This causes two problems in the current code:
      
1) If a high-priority bound task and a low-priority unbound task queue
   up behind the running task, we will fail to ever relocate the unbound
   task because we terminate the search on the first unmovable task.
      
2) We spend precious futile cycles in the fast-path trying to pull
   overloaded tasks over.  It is therefore optimal to strive to avoid the
   overhead altogether if we can cheaply detect the condition before
   overload even occurs.
      
This patch tries to achieve this optimization by utilizing the Hamming
weight of the task->cpus_allowed mask.  A weight of 1 indicates that
the task cannot be migrated.  We will then utilize this information to
skip non-migratable tasks and to eliminate unnecessary rebalance attempts.
      
      We introduce a per-rq variable to count the number of migratable tasks
      that are currently running.  We only go into overload if we have more
      than one rt task, AND at least one of them is migratable.
      
In addition, we introduce a per-task variable to cache the cpus_allowed
weight, since the Hamming-weight calculation is probably relatively expensive.
      We only update the cached value when the mask is updated which should be
      relatively infrequent, especially compared to scheduling frequency
      in the fast path.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      73fe6aae
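An illustrative sketch of the bookkeeping described above; the names and types are stand-ins for the kernel's own:

    /* Hamming weight of the allowed-cpu mask: number of set bits. */
    static inline int hamming_weight(unsigned long mask)
    {
            return __builtin_popcountl(mask);
    }

    /* Cache the weight when cpus_allowed changes (rare), not on every
       schedule (hot). */
    struct task_sketch {
            unsigned long cpus_allowed;
            int nr_cpus_allowed;  /* cached hamming_weight(cpus_allowed);
                                     1 means the task cannot migrate    */
    };

    struct rt_rq_sketch {
            int rt_nr_running;    /* rt tasks queued on this rq         */
            int rt_nr_migratory;  /* of those, how many can move away   */
    };

    /* Overload needs more than one rt task AND at least one migratable. */
    static int rt_overloaded(const struct rt_rq_sketch *rq)
    {
            return rq->rt_nr_running > 1 && rq->rt_nr_migratory > 0;
    }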
• softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks · 82a1fcb9
  Ingo Molnar committed
This patch extends the soft-lockup detector to automatically
detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
printed the following way:
      
       ------------------>
       INFO: task prctl:3042 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
       prctl         D fd5e3793     0  3042   2997
              f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
              f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
              f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
       Call Trace:
        [<c04883a5>] schedule_timeout+0x6d/0x8b
        [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
        [<c0133a76>] msleep+0x10/0x16
        [<c0138974>] sys_prctl+0x30/0x1e2
        [<c0104c52>] sysenter_past_esp+0x5f/0xa5
        =======================
       2 locks held by prctl/3042:
       #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
       #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
       <------------------
      
The current default timeout is 120 seconds. Such messages are printed
up to 10 times per bootup. If the system has crashed already then the
messages are not printed.

If lockdep is enabled then all held locks are printed as well.

This feature is a natural extension to the softlockup detector (kernel
locked up without scheduling) and to the NMI watchdog (kernel locked up
with IRQs disabled).
      
      [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
      [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      82a1fcb9
• sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups · 6b2d7700
  Srivatsa Vaddagiri committed
      The current load balancing scheme isn't good enough for precise
      group fairness.
      
      For example: on a 8-cpu system, I created 3 groups as under:
      
      	a = 8 tasks (cpu.shares = 1024)
      	b = 4 tasks (cpu.shares = 1024)
      	c = 3 tasks (cpu.shares = 1024)
      
      a, b and c are task groups that have equal weight. We would expect each
      of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.
      
      This is what I get with the latest scheduler git tree:
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 277.676 | 57.8% | 54.1%  54.1%  54.1%  54.2%  56.7%  62.2%  62.8% 64.5%
      b     | 116.108 | 24.2% | 47.4%  48.1%  48.7%  49.3%
      c     |  86.326 | 18.0% | 47.5%  47.9%  48.5%
      --------------------------------------------------------------------------------
      
Explanation of output:
      
      Col1 -> Group name
      Col2 -> Cumulative execution time (in seconds) received by all tasks of that
      	group in a 60sec window across 8 cpus
      Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
              percentage. Col3 data is derived as:
      		Col3 = 100 * Col2 / (NR_CPUS * 60)
      Col4 -> CPU bandwidth received by each individual task of the group.
      		Col4 = 100 * cpu_time_recd_by_task / 60
      
[I can share the test case that produces similar output if required]
      
      The deviation from desired group fairness is as below:
      
      	a = +24.47%
      	b = -9.13%
      	c = -15.33%
      
      which is quite high.
      
      After the patch below is applied, here are the results:
      
      --------------------------------------------------------------------------------
      Col1  | Col2    | Col3  |  Col4
      ------|---------|-------|-------------------------------------------------------
      a     | 163.112 | 34.0% | 33.2%  33.4%  33.5%  33.5%  33.7%  34.4%  34.8% 35.3%
      b     | 156.220 | 32.5% | 63.3%  64.5%  66.1%  66.5%
      c     | 160.653 | 33.5% | 85.8%  90.6%  91.4%
      --------------------------------------------------------------------------------
      
      Deviation from desired group fairness is as below:
      
      	a = +0.67%
      	b = -0.83%
      	c = +0.17%
      
which is far better IMO. Most of the other runs have yielded a deviation
within +-2% at most, which is good.
      
Why do we see bad (group) fairness with the current scheduler?
===============================================================
      
Currently a cpu's weight is just the summation of individual task weights.
This can yield incorrect results. For example, consider three groups as
below on a 2-cpu system:
      
      	CPU0	CPU1
      ---------------------------
      	A (10)  B(5)
      		C(5)
      ---------------------------
      
Group A has 10 tasks, all on CPU0; groups B and C have 5 tasks each, all
of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
1024).
      
      The current scheme would yield a cpu weight of 10240 (10*1024) for each cpu and
      the load balancer will think both CPUs are perfectly balanced and won't
      move around any tasks. This, however, would yield this bandwidth:
      
      	A = 50%
      	B = 25%
      	C = 25%
      
      which is not the desired result.
      
      What's changing in the patch?
      =============================
      
	- How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
	  defined (see below)
      	- API Change
      		- Two tunables introduced in sysfs (under SCHED_DEBUG) to
      		  control the frequency at which the load balance monitor
      		  thread runs.
      
The basic change made in this patch is how cpu weight (rq->load.weight) is
calculated. It's now calculated as the summation of group weights on a cpu,
      rather than summation of task weights. Weight exerted by a group on a
      cpu is dependent on the shares allocated to it and also the number of
      tasks the group has on that cpu compared to the total number of
      (runnable) tasks the group has in the system.
      
      Let,
      	W(K,i)  = Weight of group K on cpu i
      	T(K,i)  = Task load present in group K's cfs_rq on cpu i
      	T(K)    = Total task load of group K across various cpus
      	S(K) 	= Shares allocated to group K
      	NRCPUS	= Number of online cpus in the scheduler domain to
      	 	  which group K is assigned.
      
      Then,
      	W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
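
Plugging the earlier 2-cpu A/B/C example into this formula shows how the
balancer now sees the imbalance (a worked check, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            const double S = 1024.0, NRCPUS = 2.0;

            /* Group A: 10 tasks on cpu0; groups B, C: 5 tasks on cpu1. */
            double W_A0 = S * NRCPUS * (10 * 1024.0) / (10 * 1024.0);
            double W_B1 = S * NRCPUS * (5 * 1024.0) / (5 * 1024.0);
            double W_C1 = W_B1;

            /* cpu0 weighs 2048 and cpu1 weighs 4096: exactly the
               imbalance that the old per-task summation (10240 vs
               10240) could not see. */
            printf("cpu0=%.0f cpu1=%.0f\n", W_A0, W_B1 + W_C1);
            return 0;
    }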
      
A load balance monitor thread is created at bootup, which periodically
runs and adjusts each group's weight on each cpu. To avoid its overhead, two
      min/max tunables are introduced (under SCHED_DEBUG) to control the rate
      at which it runs.
      
      Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>
      
      - don't start the load_balance_monitor when there is only a single cpu.
- rename the kthread because it's currently longer than TASK_COMM_LEN
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6b2d7700
3. 25 January 2008, 1 commit
• fix struct user_info export's sysfs interaction · eb41d946
  Kay Sievers committed
Clean up the use of ksets and kobjects. Kobjects are instances of
objects (like struct user_info); ksets are collections of objects of a
similar type (like the uids directory containing the user_info directories).
So, use kobjects for the user_info directories, and a kset for the "uids"
directory.
      
      On object cleanup, the final kobject_put() was missing.
      
      Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
      Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      eb41d946
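A hedged in-kernel sketch of the layout described above, using the kset_create_and_add()/kobject_init_and_add() style of API; error handling is trimmed and the names are illustrative, not the patch's actual code:

    #include <linux/kobject.h>
    #include <linux/slab.h>

    static struct kset *uids_kset;       /* the "uids" directory        */

    struct uid_entry {
            struct kobject kobj;         /* one directory per uid       */
            unsigned int uid;
    };

    static void uid_release(struct kobject *kobj)
    {
            kfree(container_of(kobj, struct uid_entry, kobj));
    }

    static struct kobj_type uid_ktype = { .release = uid_release };

    static int example_init(void)
    {
            struct uid_entry *ue = kzalloc(sizeof(*ue), GFP_KERNEL);

            uids_kset = kset_create_and_add("uids", NULL, kernel_kobj);
            ue->kobj.kset = uids_kset;
            kobject_init_and_add(&ue->kobj, &uid_ktype, NULL, "%u", 500);
            /* ... on cleanup, the final reference must be dropped: */
            kobject_put(&ue->kobj);
            return 0;
    }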
4. 14 January 2008, 1 commit
• remove task_ppid_nr_ns · 84427eae
  Roland McGrath committed
      task_ppid_nr_ns is called in three places.  One of these should never
      have called it.  In the other two, using it broke the existing
      semantics.  This was presumably accidental.  If the function had not
      been there, it would have been much more obvious to the eye that those
      patches were changing the behavior.  We don't need this function.
      
      In task_state, the pid of the ptracer is not the ppid of the ptracer.
      
      In do_task_stat, ppid is the tgid of the real_parent, not its pid.
      I also moved the call outside of lock_task_sighand, since it doesn't
      need it.
      
      In sys_getppid, ppid is the tgid of the real_parent, not its pid.
Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      84427eae
5. 28 November 2007, 1 commit
6. 10 November 2007, 5 commits
7. 30 October 2007, 2 commits
8. 26 October 2007, 1 commit
9. 25 October 2007, 3 commits
10. 20 October 2007, 5 commits
• Isolate the explicit usage of signal->pgrp · 9a2e7057
  Pavel Emelyanov committed
The pgrp field is not used widely around the kernel, so it is now marked as
deprecated with an appropriate comment.
      
      The initialization of INIT_SIGNALS is trimmed because
      a) they are set to 0 automatically;
      b) gcc cannot properly initialize two anonymous (the second one
         is the one with the session) unions. In this particular case
         to make it compile we'd have to add some field initialized
         right before the .pgrp.
      
      This is the same patch as the 1ec320af one
      (from Cedric), but for the pgrp field.
      
      Some progress report:
      
      We have to deprecate the pid, tgid, session and pgrp fields on struct
      task_struct and struct signal_struct.  The session and pgrp are already
deprecated.  The tgid value is close to being such - the worst known usage
is in fs/locks.c and the audit code.  The pid field deprecation is mainly
blocked by numerous printk-s around the kernel that print tsk->pid to
the log.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9a2e7057
• cpuset sched_load_balance flag · 029190c5
  Paul Jackson committed
      Add a new per-cpuset flag called 'sched_load_balance'.
      
      When enabled in a cpuset (the default value) it tells the kernel scheduler
      that the scheduler should provide the normal load balancing on the CPUs in
      that cpuset, sometimes moving tasks from one CPU to a second CPU if the
      second CPU is less loaded and if that task is allowed to run there.
      
      When disabled (write "0" to the file) then it tells the kernel scheduler
      that load balancing is not required for the CPUs in that cpuset.
      
      Now even if this flag is disabled for some cpuset, the kernel may still
      have to load balance some or all the CPUs in that cpuset, if some
      overlapping cpuset has its sched_load_balance flag enabled.
      
      If there are some CPUs that are not in any cpuset whose sched_load_balance
      flag is enabled, the kernel scheduler will not load balance tasks to those
      CPUs.
      
Moreover the kernel will partition the 'sched domains' (non-overlapping
sets of CPUs over which load balancing is attempted) into the finest
granularity partition that it can find, while still keeping any two CPUs
that are in the same sched_load_balance-enabled cpuset in the same element
of the partition.
      
      This serves two purposes:
       1) It provides a mechanism for real time isolation of some CPUs, and
       2) it can be used to improve performance on systems with many CPUs
          by supporting configurations in which load balancing is not done
          across all CPUs at once, but rather only done in several smaller
          disjoint sets of CPUs.
      
This mechanism replaces the earlier overloading of the per-cpuset
flag 'cpu_exclusive'; that overloading was removed in an earlier
patch: cpuset-remove-sched-domain-hooks-from-cpusets
      
      See further the Documentation and comments in the code itself.
      
      [akpm@linux-foundation.org: don't be weird]
Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      029190c5
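A minimal userspace sketch of toggling the flag, assuming the cpuset filesystem is mounted at /dev/cpuset (the conventional mount point in the documentation of that era) and that a cpuset named "rtcpus" exists:

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/dev/cpuset/rtcpus/sched_load_balance", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("0\n", f); /* "0" disables balancing for this cpuset;
                                "1" (the default) enables it           */
            fclose(f);
            return 0;
    }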
• Uninline the task_xid_nr_ns() calls · 2f2a3a46
  Pavel Emelyanov committed
Since these are expanded into a call to pid_nr_ns() anyway, it's OK to move
the whole routine out-of-line.  This is a cheap way to save ~100 bytes from
      vmlinux.  Together with the previous two patches, it saves half-a-kilo from
      the vmlinux.
      
Un-inlining other (currently inlined) functions must be done with additional
performance testing.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Paul Menage <menage@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f2a3a46
• Isolate some explicit usage of task->tgid · bac0abd6
  Pavel Emelyanov committed
      With pid namespaces this field is now dangerous to use explicitly, so hide
      it behind the helpers.
      
Also the pid and pgrp fields of task_struct and signal_struct are to be
deprecated.  Unfortunately the deprecation patch cannot be sent right now
as it leads to tons of warnings, so start isolating them, and deprecate later.
      
Actually the p->tgid == pid check has to be changed to has_group_leader_pid(),
but Oleg pointed out that in the case of posix cpu timers this is the same,
and thread_group_leader() is preferable.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bac0abd6
• Uninline find_task_by_xxx set of functions · 228ebcbe
  Pavel Emelyanov committed
find_task_by_something is a set of macros used to find a task by pid,
depending on what kind of pid is given - a global or a virtual one.  All of
them are wrappers around the most generic one - find_task_by_pid_type_ns() -
and just substitute some of its arguments.

It turned out that dereferencing the current->nsproxy->pid_ns construction
and pushing one more argument onto the stack inline causes the kernel text
size to grow.
      
      This patch moves all this stuff out-of-line into kernel/pid.c.  Together
      with the next patch it saves a bit less than 400 bytes from the .text
      section.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Paul Menage <menage@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      228ebcbe
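A simplified sketch of the move: the wrappers become plain out-of-line functions in kernel/pid.c that funnel into the generic helper. Declarations are stubbed and details trimmed; this is illustrative, not the patch's exact code:

    struct task_struct;
    struct pid_namespace;
    extern struct pid_namespace init_pid_ns;

    /* The most generic routine everything funnels through. */
    extern struct task_struct *find_task_by_pid_type_ns(int type, int nr,
                                              struct pid_namespace *ns);

    /* Formerly an inline/macro wrapper; now a plain function, so call
       sites no longer expand the namespace lookup and push the extra
       argument inline. */
    struct task_struct *find_task_by_pid(int nr)
    {
            /* 0 stands in for PIDTYPE_PID here; the virtual-pid variant
               would pass current->nsproxy->pid_ns instead. */
            return find_task_by_pid_type_ns(0, nr, &init_pid_ns);
    }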