1. 22 Feb 2011, 2 commits
  2. 26 Jan 2011, 1 commit
  3. 12 Jan 2011, 1 commit
  4. 08 Jan 2011, 1 commit
  5. 03 Jan 2011, 2 commits
  6. 18 Dec 2010, 1 commit
  7. 14 Dec 2010, 1 commit
  8. 10 Dec 2010, 1 commit
  9. 30 Nov 2010, 1 commit
    • sched: Add 'autogroup' scheduling feature: automated per session task groups · 5091faa4
      Mike Galbraith committed
      A recurring complaint from CFS users is that parallel kbuild has
      a negative impact on desktop interactivity.  This patch
      implements an idea from Linus, to automatically create task
      groups.  Currently, only per session autogroups are implemented,
      but the patch leaves the way open for enhancement.
      
      Implementation: each task's signal struct contains an inherited
      pointer to a refcounted autogroup struct containing a task group
      pointer, the default for all tasks pointing to the
      init_task_group.  When a task calls setsid(), a new task group
      is created, the process is moved into the new task group, and a
      reference to the previous task group is dropped.  Child
      processes inherit this task group thereafter, and increase its
      refcount.  When the last thread of a process exits, the
      process's reference is dropped, such that when the last process
      referencing an autogroup exits, the autogroup is destroyed.
      
      At runqueue selection time, IFF a task has no cgroup assignment,
      its current autogroup is used.
      
      Autogroup bandwidth is controllable by setting its nice level
      through the proc filesystem:
      
        cat /proc/<pid>/autogroup
      
      Displays the task's group and the group's nice level.
      
        echo <nice level> > /proc/<pid>/autogroup
      
      Sets the task group's shares to the weight of a nice <level> task.
      Setting the nice level is rate limited for !admin users due to the
      abuse risk of task group locking.
      
      The feature is enabled by default at boot if
      CONFIG_SCHED_AUTOGROUP=y is selected, but can be disabled via
      the boot option noautogroup, and can also be turned on/off on
      the fly via:
      
        echo [01] > /proc/sys/kernel/sched_autogroup_enabled
      
      ... which will automatically move tasks to/from the root task group.
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      [ Removed the task_group_path() debug code, and fixed !EVENTFD build failure. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <1290281700.28711.9.camel@maggy.simson.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5091faa4
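
      As a quick illustration of the interface described above, here is a
      minimal userspace sketch (not part of the patch): it calls setsid() to
      land in a fresh autogroup, then reads and writes /proc/self/autogroup.
      The output format shown in the comment and the chosen nice value are
      illustrative assumptions.

        /* autogroup_demo.c - requires a kernel with CONFIG_SCHED_AUTOGROUP=y */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[128];
            FILE *f;

            /* setsid() fails if we already lead a process group; this sketch
             * simply ignores that case. */
            setsid();

            f = fopen("/proc/self/autogroup", "r");
            if (!f) {
                perror("/proc/self/autogroup");
                return 1;
            }
            if (fgets(buf, sizeof(buf), f))
                printf("autogroup: %s", buf);   /* e.g. "/autogroup-42 nice 0" */
            fclose(f);

            /* Deprioritize this whole session by giving its group nice 10. */
            f = fopen("/proc/self/autogroup", "w");
            if (f) {
                fprintf(f, "10\n");
                fclose(f);
            }
            return 0;
        }
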
  10. 29 Nov 2010, 1 commit
  11. 25 Nov 2010, 1 commit
    • cgroups: make swap accounting default behavior configurable · a42c390c
      Michal Hocko committed
      Swap accounting can be configured by CONFIG_CGROUP_MEM_RES_CTLR_SWAP
      configuration option and then it is turned on by default.  There is a boot
      option (noswapaccount) which can disable this feature.
      
      This makes it hard for distributors to enable the configuration option,
      as this feature leads to bigger memory consumption and that is a no-go
      for a general purpose distribution kernel.  On the other hand, swap
      accounting may be very useful for some workloads.
      
      This patch adds a new configuration option which controls the default
      behavior (CGROUP_MEM_RES_CTLR_SWAP_ENABLED).  If the option is selected
      then the feature is turned on by default.
      
      It also adds a new boot parameter swapaccount[=1|0] which enhances the
      original noswapaccount parameter semantic by means of enable/disable logic
      (defaults to 1 if no value is provided to be still consistent with
      noswapaccount).
      
      The default behavior is unchanged (if CONFIG_CGROUP_MEM_RES_CTLR_SWAP is
      enabled then CONFIG_CGROUP_MEM_RES_CTLR_SWAP_ENABLED is enabled as well).
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a42c390c
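
      As a hedged, kernel-style sketch of how a boot switch with the semantics
      described above is typically wired up, the fragment below uses the
      standard __setup() hook; the variable and function names are
      illustrative and not necessarily those used by the actual patch.

        #include <linux/init.h>
        #include <linux/string.h>

        /* Default follows the Kconfig choice described in the changelog. */
        #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP_ENABLED
        static int really_do_swap_account __initdata = 1;
        #else
        static int really_do_swap_account __initdata = 0;
        #endif

        static int __init swapaccount_setup(char *s)
        {
                /* Bare "swapaccount" (empty s) behaves like "swapaccount=1",
                 * i.e. the old noswapaccount semantics in reverse. */
                if (!strcmp(s, "=0"))
                        really_do_swap_account = 0;
                else
                        really_do_swap_account = 1;
                return 1;
        }
        __setup("swapaccount", swapaccount_setup);
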
  12. 11 Nov 2010, 1 commit
  13. 27 Oct 2010, 1 commit
    • resources: support allocating space within a region from the top down · e7f8567d
      Bjorn Helgaas committed
      Allocate space from the top of a region first, then work downward,
      if an architecture desires this.
      
      When we allocate space from a resource, we look for gaps between children
      of the resource.  Previously, we always looked at gaps from the bottom up.
      For example, given this:
      
          [mem 0xbff00000-0xf7ffffff] PCI Bus 0000:00
            [mem 0xbff00000-0xbfffffff] gap -- available
            [mem 0xc0000000-0xdfffffff] PCI Bus 0000:02
            [mem 0xe0000000-0xf7ffffff] gap -- available
      
      we attempted to allocate from the [mem 0xbff00000-0xbfffffff] gap first,
      then the [mem 0xe0000000-0xf7ffffff] gap.
      
      With this patch an architecture can choose to allocate from the top gap
      [mem 0xe0000000-0xf7ffffff] first.
      
      We can't do this across the board because iomem_resource.end is initialized
      to 0xffffffff_ffffffff on 64-bit architectures, and most machines can't
      address the entire 64-bit physical address space.  Therefore, we only
      allocate top-down if the arch requests it by clearing
      "resource_alloc_from_bottom".
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      e7f8567d
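
      The effect is easiest to see on a toy model.  The self-contained C
      sketch below (an illustration of the allocation order only, not the
      kernel's resource code) walks the gaps between sorted children from the
      top and places a request in the highest gap that fits, reproducing the
      example from the changelog:

        #include <stdio.h>

        struct range { unsigned long start, end; };     /* inclusive, sorted */

        /* Return the start of the highest gap of at least 'size' bytes inside
         * [win_start, win_end] not covered by 'kids', or 0 if none fits. */
        static unsigned long alloc_top_down(unsigned long win_start,
                                            unsigned long win_end,
                                            const struct range *kids, int n,
                                            unsigned long size)
        {
            unsigned long gap_end = win_end;
            int i;

            for (i = n - 1; i >= -1; i--) {
                unsigned long gap_start = (i >= 0) ? kids[i].end + 1 : win_start;

                if (gap_end >= gap_start && gap_end - gap_start + 1 >= size)
                    return gap_end - size + 1;  /* place at the top of the gap */
                if (i >= 0)
                    gap_end = kids[i].start - 1;
            }
            return 0;
        }

        int main(void)
        {
            /* The PCI window from the changelog, with one child bus in the middle. */
            struct range kids[] = { { 0xc0000000UL, 0xdfffffffUL } };
            unsigned long start = alloc_top_down(0xbff00000UL, 0xf7ffffffUL,
                                                 kids, 1, 0x100000UL);

            printf("top-down 1MB allocation at [mem %#lx-%#lx]\n",
                   start, start + 0xfffffUL);
            return 0;
        }
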
  14. 25 Oct 2010, 1 commit
  15. 24 Oct 2010, 3 commits
  16. 23 Oct 2010, 2 commits
    • SYSFS: Allow boot time switching between deprecated and modern sysfs layout · e52eec13
      Andi Kleen committed
      I have some systems which need legacy sysfs due to old tools that are
      making assumptions that a directory can never be a symlink to another
      directory, and it's a big hassle to compile separate kernels for them.
      
      This patch turns CONFIG_SYSFS_DEPRECATED into a run time option
      that can be switched on/off on the kernel command line. This way
      the same binary can be used in both cases with just an option
      on the command line.
      
      The old CONFIG_SYSFS_DEPRECATED_V2 option is still there to set
      the default. I kept the weird name to not break existing
      config files.
      
      Also the compat code can still be completely disabled by undefining
      CONFIG_SYSFS_DEPRECATED_SWITCH -- the optimizer now takes
      care of this instead of lots of ifdefs. This makes the code
      look nicer.
      
      v2: This is an updated version on top of Kay's patch to only
      handle the block devices. I tested it on my old systems
      and that seems to work.
      
      Cc: axboe@kernel.dk
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      e52eec13
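
      A hedged sketch of the "optimizer instead of ifdefs" pattern mentioned
      above: when the compile-time switch is off, the flag becomes a constant
      zero, so every 'if (sysfs_deprecated)' branch is dead code the compiler
      removes and callers need no #ifdef.  The names and comments are
      illustrative, not the exact kernel code.

        #ifdef CONFIG_SYSFS_DEPRECATED_SWITCH
        extern int sysfs_deprecated;    /* toggled from the kernel command line */
        #else
        #define sysfs_deprecated 0      /* constant: compat branches optimized away */
        #endif

        static void example_register(void)
        {
                if (sysfs_deprecated) {
                        /* create the old, flat deprecated layout */
                } else {
                        /* create the modern sysfs hierarchy */
                }
        }
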
    • Dynamic Debug: Introduce ddebug_query= boot parameter · a648ec05
      Thomas Renninger committed
      Dynamic debug lacks the ability to enable debug messages at boot time.
      One could patch initramfs or service startup scripts to write to
      /sys/../dynamic_debug/control, but this sucks.
      
      This patch makes it possible to pass a query in the same format one can
      write to /sys/../dynamic_debug/control via a boot parameter.
      When dynamic debug gets initialized, this query will automatically be
      applied.
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      Acked-by: jbaron@redhat.com
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      a648ec05
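
      The query format accepted by ddebug_query= is the same one used at run
      time through the debugfs control file.  As a hedged illustration, the
      small userspace program below submits such a query; the chosen source
      file and the debugfs mount point are assumptions.

        /* Enable pr_debug() output for one source file at run time.  The same
         * query could be passed at boot as ddebug_query="file svcsock.c +p". */
        #include <stdio.h>

        int main(void)
        {
            const char *ctl = "/sys/kernel/debug/dynamic_debug/control";
            FILE *f = fopen(ctl, "w");

            if (!f) {
                perror(ctl);    /* debugfs not mounted or dynamic debug disabled */
                return 1;
            }
            fprintf(f, "file svcsock.c +p\n");  /* "+p" turns the messages on */
            return fclose(f) ? 1 : 0;
        }
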
  17. 19 Oct 2010, 1 commit
  18. 17 Oct 2010, 1 commit
  19. 17 Sep 2010, 1 commit
  20. 26 Aug 2010, 1 commit
  21. 25 Aug 2010, 2 commits
    • PCI: PCIe: Ask BIOS for control of all native services at once · 28eb5f27
      Rafael J. Wysocki committed
      After commit 852972ac (ACPI: Disable
      ASPM if the platform won't provide _OSC control for PCIe) control of
      the PCIe Capability Structure is unconditionally requested by
      acpi_pci_root_add(), which in principle may cause problems to
      happen in two ways.  First, the BIOS may refuse to give control of
      the PCIe Capability Structure if it is not asked for any of the
      _OSC features depending on it at the same time.  Second, the BIOS may
      assume that control of the _OSC features depending on the PCIe
      Capability Structure will be requested in the future and may behave
      incorrectly if that doesn't happen.  For this reason, control of
      the PCIe Capability Structure should always be requested along with
      control of any other _OSC features that may depend on it (i.e. PCIe
      native PME, PCIe native hot-plug, PCIe AER).
      
      Rework the PCIe port driver so that (1) it checks which native PCIe
      port services can be enabled, according to the BIOS, and (2) it
      requests control of all these services simultaneously.  In
      particular, this causes pcie_portdrv_probe() to fail if the BIOS
      refuses to grant control of the PCIe Capability Structure, which
      means that no native PCIe port services can be enabled for the PCIe
      Root Complex the given port belongs to.  If that happens, ASPM is
      disabled to avoid problems with mishandling it by the part of the
      PCIe hierarchy for which control of the PCIe Capability Structure
      has not been received.
      
      Make it possible to override this behavior using 'pcie_ports=native'
      (use the PCIe native services regardless of the BIOS response to the
      control request), or 'pcie_ports=compat' (do not use the PCIe native
      services at all).
      
      Accordingly, rework the existing PCIe port service drivers so that
      they don't request control of the services directly.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      28eb5f27
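
      Conceptually, the change replaces several independent control requests
      with a single negotiated mask.  The self-contained C sketch below models
      that "ask for everything at once, fall back if refused" flow; the flag
      names and the stubbed firmware_grant() helper are made up for this
      illustration and do not correspond to real kernel or ACPI interfaces.

        #include <stdio.h>

        /* Made-up flags standing in for the _OSC control bits discussed above. */
        enum { SRV_PME = 1, SRV_HOTPLUG = 2, SRV_AER = 4, SRV_CAP_STRUCT = 8 };

        /* Stubbed firmware negotiation: pretend the BIOS grants PME and the
         * capability structure but refuses AER. */
        static int firmware_grant(int requested)
        {
            return requested & (SRV_PME | SRV_CAP_STRUCT);
        }

        static int setup_port_services(int supported)
        {
            /* Request all supported services plus the capability structure in
             * one go, rather than one service at a time. */
            int wanted = supported | SRV_CAP_STRUCT;
            int granted = firmware_grant(wanted);

            if (!(granted & SRV_CAP_STRUCT)) {
                printf("capability structure refused: disable native services and ASPM\n");
                return -1;
            }
            printf("native services granted: %#x of %#x requested\n",
                   granted & ~SRV_CAP_STRUCT, supported);
            return granted;
        }

        int main(void)
        {
            setup_port_services(SRV_PME | SRV_HOTPLUG | SRV_AER);
            return 0;
        }
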
    • PCI: PCIe: Introduce command line switch for disabling port services · 79dd9182
      Rafael J. Wysocki committed
      Introduce kernel command line switch pcie_ports= allowing one to
      disable all of the native PCIe port services, so that PCIe ports
      are treated like PCI-to-PCI bridges.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      79dd9182
  22. 24 Aug 2010, 1 commit
    • x86, vmware: Remove deprecated VMI kernel support · 9863c90f
      Alok Kataria committed
      With the recent innovations in CPU hardware acceleration technologies
      from Intel and AMD, VMware ran a few experiments to compare these
      techniques to guest paravirtualization technique on VMware's platform.
      These hardware assisted virtualization techniques have outperformed the
      performance benefits provided by VMI in most of the workloads. VMware
      expects that these hardware features will be ubiquitous in a couple of
      years, as a result, VMware has started a phased retirement of this
      feature from the hypervisor.
      
      Please note that VMI has always been an optimization and non-VMI kernels
      still work fine on VMware's platform.
      The latest versions of VMware's products which support VMI are
      Workstation 7.0 and vSphere 4.0 on the ESX side; future maintenance
      releases for these products will continue supporting VMI.
      
      For more details about the VMI retirement, see:
      http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html
      
      This feature removal was scheduled for 2.6.37 back in September 2009.
      Signed-off-by: Alok N Kataria <akataria@vmware.com>
      LKML-Reference: <1282600151.19396.22.camel@ank32.eng.vmware.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      9863c90f
  23. 23 Aug 2010, 2 commits
  24. 16 Aug 2010, 1 commit
  25. 15 Aug 2010, 1 commit
  26. 11 Aug 2010, 2 commits
  27. 10 Aug 2010, 1 commit
  28. 05 Aug 2010, 1 commit
  29. 02 Aug 2010, 1 commit
  30. 31 Jul 2010, 1 commit
    • x86/PCI: Add option to not assign BARs if not already assigned · 7bd1c365
      Mike Habeck committed
      The Linux kernel assigns BARs that a BIOS did not assign, most likely
      to handle broken BIOSes that didn't enumerate the devices correctly.
      On UV the BIOS purposely doesn't assign I/O BARs for certain devices/
      drivers we know don't use them (examples, LSI SAS, Qlogic FC, ...).
      We purposely don't assign these I/O BARs because I/O Space is a very
      limited resource.  There is only 64k of I/O Space, and in a PCIe
      topology that space gets divided up into 4k chunks (this is due to
      the fact that a pci-to-pci bridge's I/O decoder is aligned at 4k)...
      Thus a system can have at most 16 cards with I/O BARs: (64k / 4k = 16)
      
      SGI needs to scale to >16 devices with I/O BARs.  So by not assigning
      I/O BARs on devices we know don't use them, we can do that (iff the
      kernel doesn't go and assign these BARs that the BIOS purposely didn't
      assign).
      
      This patch will not assign a resource to a device BAR if that BAR was
      not assigned by the BIOS, and the kernel cmdline option 'pci=nobar'
      was specified.  This patch is closely modeled after the 'pci=norom'
      option that currently exists in the tree.
      Signed-off-by: Mike Habeck <habeck@sgi.com>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      7bd1c365
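
      A hedged sketch of the check such an option adds: before assigning a
      resource to a BAR the BIOS left empty, bail out if the user asked the
      kernel to leave unassigned BARs alone.  The flag and helper names are
      illustrative and modelled on the existing 'pci=norom' handling, not
      taken from the patch itself.

        #include <linux/pci.h>

        static bool pci_no_assign_bars;         /* set when "pci=nobar" is given */

        static int maybe_assign_bar(struct pci_dev *dev, int bar)
        {
                struct resource *res = &dev->resource[bar];

                /* The BIOS left this BAR unassigned ... */
                if (!res->start && res->end) {
                        /* ... and the user asked us not to touch it. */
                        if (pci_no_assign_bars)
                                return 0;
                        return pci_assign_resource(dev, bar);
                }
                return 0;
        }
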
  31. 27 Jul 2010, 2 commits