1. 31 Dec 2008, 1 commit
  2. 30 Dec 2008, 25 commits
  3. 29 Dec 2008, 14 commits
    • DMI: add dmi_match · d61c72e5
      Authored by Jiri Slaby
      Add a wrapper for testing system_info which will also handle NULL
      system infos.
      
      This will be used by the ata PIIX driver.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: Alexandru Romanescu <a_romanescu@yahoo.co.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
      d61c72e5
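      A minimal sketch of what the NULL-safe wrapper described above can look like; the
      real implementation lives in drivers/firmware/dmi_scan.c and may differ in detail:

      #include <linux/dmi.h>
      #include <linux/string.h>

      /* Sketch of a NULL-safe DMI match helper; details may differ from dmi_scan.c. */
      bool dmi_match(enum dmi_field f, const char *str)
      {
              const char *info = dmi_get_system_info(f);

              /* Two NULLs match each other; a single NULL never matches. */
              if (info == NULL || str == NULL)
                      return info == str;

              return !strcmp(info, str);
      }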
    • slab: Fix comment on #endif · dfcd3610
      Authored by Pascal Terjan
      The comment on this #endif in slab.h describes it as closing the inner block,
      while it actually closes the big CONFIG_NUMA one. That makes the code a bit
      harder to read.
      
      This trivial patch fixes the comment.
      Signed-off-by: Pascal Terjan <pterjan@mandriva.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      dfcd3610
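      For illustration only, the shape of the fix: make the #endif comment name the
      guard it actually closes. The guard names below are placeholders, not the exact
      slab.h contents.

      #ifdef CONFIG_NUMA
      # ifdef CONFIG_DEBUG_SLAB            /* inner block (placeholder guard) */
      /* ... */
      # endif /* CONFIG_DEBUG_SLAB */
      /* ... */
      #endif /* CONFIG_NUMA */             /* the comment this patch corrects */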
    • SLUB: failslab support · 773ff60e
      Authored by Akinobu Mita
      Currently, the fault-injection capability for the slab allocator is only
      available with SLAB. This patch makes it available to SLUB, too.
      
      [penberg@cs.helsinki.fi: unify slab and slub implementations]
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      773ff60e
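      A rough sketch of a unified failslab hook built on the generic fault-injection
      framework; the attribute wrapper and gfp checks shown here are assumptions, not
      the exact mm/failslab.c contents:

      #include <linux/fault-inject.h>
      #include <linux/gfp.h>

      /* Sketch only: one fault-injection attribute shared by SLAB and SLUB. */
      static struct failslab_attr {
              struct fault_attr attr;
              u32 ignore_gfp_wait;            /* assumed knob: skip __GFP_WAIT allocations */
      } failslab = {
              .attr = FAULT_ATTR_INITIALIZER,
              .ignore_gfp_wait = 1,
      };

      bool should_failslab(size_t size, gfp_t gfpflags)
      {
              if (gfpflags & __GFP_NOFAIL)
                      return false;

              if (failslab.ignore_gfp_wait && (gfpflags & __GFP_WAIT))
                      return false;

              /* should_fail() decides based on the configured probability/interval. */
              return should_fail(&failslab.attr, size);
      }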
    • DRM: add mode setting support · f453ba04
      Authored by Dave Airlie
      Add mode setting support to the DRM layer.
      
      This is a fairly big chunk of work that allows DRM drivers to provide
      full output control and configuration capabilities to userspace.  It was
      motivated by several factors:
        - the fb layer's APIs aren't suited for anything but simple
          configurations
        - coordination between the fb layer, DRM layer, and various userspace
          drivers is poor to non-existent (radeonfb excepted)
        - user-level mode setting drivers make displaying panic & oops
          messages more difficult
        - suspend/resume of graphics state is possible in many more
          configurations with kernel level support
      
      This commit just adds the core DRM part of the mode setting APIs.
      Driver-specific commits using these new structures and APIs will follow.
      
      Co-authors: Jesse Barnes <jbarnes@virtuousgeek.org>, Jakob Bornecrantz <jakob@tungstengraphics.com>
      Contributors: Alan Hourihane <alanh@tungstengraphics.com>, Maarten Maathuis <madman2003@gmail.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Eric Anholt <eric@anholt.net>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      f453ba04
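      To give a feel for the driver-facing side, a minimal sketch of how a driver might
      initialize the new mode setting core and register a CRTC. The foo_* names, size
      limits, and helper usage are illustrative assumptions, not part of this commit:

      #include "drmP.h"
      #include "drm_crtc.h"
      #include "drm_crtc_helper.h"

      static void foo_crtc_destroy(struct drm_crtc *crtc)
      {
              drm_crtc_cleanup(crtc);         /* unregister from the mode setting core */
              kfree(crtc);
      }

      static const struct drm_crtc_funcs foo_crtc_funcs = {
              .set_config = drm_crtc_helper_set_config,   /* generic modeset helper */
              .destroy    = foo_crtc_destroy,
      };

      static int foo_modeset_init(struct drm_device *dev)
      {
              struct drm_crtc *crtc;

              drm_mode_config_init(dev);      /* set up core mode setting state */
              dev->mode_config.min_width  = 0;
              dev->mode_config.min_height = 0;
              dev->mode_config.max_width  = 4096;     /* assumed hardware limit */
              dev->mode_config.max_height = 4096;

              crtc = kzalloc(sizeof(*crtc), GFP_KERNEL);
              if (!crtc)
                      return -ENOMEM;

              drm_crtc_init(dev, crtc, &foo_crtc_funcs);      /* expose CRTC to userspace */
              return 0;
      }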
    • Get rid of CONFIG_LSF · b3a6ffe1
      Authored by Jens Axboe
      We have two separate config entries for large devices/files. One
      is CONFIG_LBD, which guards just the devices; the other is CONFIG_LSF,
      which handles large files. This doesn't make a lot of sense: you typically
      want both or neither. So get rid of CONFIG_LSF and change the CONFIG_LBD
      wording to indicate that it covers both.
      Acked-by: Jean Delvare <khali@linux-fr.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      b3a6ffe1
    • block: add one-hit cache for disk partition lookup · a6f23657
      Authored by Jens Axboe
      disk_map_sector_rcu() returns a partition from a sector offset,
      which we use for IO statistics on a per-partition basis. The
      lookup itself is an O(N) list lookup, where N is the number of
      partitions. This actually hurts performance quite a bit, even
      for lower-numbered partitions; for higher-numbered partitions
      it can get pretty bad.
      
      Solve this by adding a one-hit cache for partition lookup.
      This makes the lookup O(1) for the case where we do most IO to
      one partition. Even for mixed partition workloads, amortized cost
      is pretty close to O(1) since the natural IO batching makes the
      one-hit cache last for lots of IOs.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      a6f23657
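      A sketch of the lookup with the one-hit cache, assuming a last_lookup pointer in
      the partition table; helper and field names may not match the actual
      block/genhd.c code exactly:

      /* Sketch only: O(1) fast path via a cached last-hit partition.
       * Caller is assumed to hold rcu_read_lock(). */
      struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
      {
              struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
              struct hd_struct *part;
              int i;

              /* One-hit cache: most IO goes to the same partition as last time. */
              part = rcu_dereference(ptbl->last_lookup);
              if (part && sector_in_part(part, sector))
                      return part;

              /* Slow path: O(N) scan, then remember the hit for next time. */
              for (i = 1; i < ptbl->len; i++) {
                      part = rcu_dereference(ptbl->part[i]);
                      if (part && sector_in_part(part, sector)) {
                              rcu_assign_pointer(ptbl->last_lookup, part);
                              return part;
                      }
              }
              return &disk->part0;
      }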
    • block: get rid of elevator_t typedef · b374d18a
      Authored by Jens Axboe
      Just use struct elevator_queue everywhere instead.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      b374d18a
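      The change is purely mechanical; an illustrative (not actual) hunk:

      /* Before: the typedef obscured the underlying type. */
      elevator_t *e = q->elevator;

      /* After: spell out the struct everywhere; the typedef is gone. */
      struct elevator_queue *e = q->elevator;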
    • aio: make the lookup_ioctx() lockless · abf137dd
      Authored by Jens Axboe
      The mm->ioctx_list is currently protected by a reader-writer lock,
      so we always grab that lock on the read side for doing ioctx
      lookups. As the workload is extremely reader-biased, turn this into
      an RCU hlist so we can make lookup_ioctx() lockless. Get rid of
      the rwlock and use a spinlock to provide update-side exclusion.
      
      There's usually only 1 entry on this list, so it doesn't make sense
      to look into fancier data structures.
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      abf137dd
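      A sketch of the lockless lookup this enables; the kioctx field names (user_id,
      dead, list) are assumptions matching the description, not necessarily the exact
      fs/aio.c code:

      /* Sketch only: RCU-protected lookup, no rwlock on the read side. */
      static struct kioctx *lookup_ioctx(unsigned long ctx_id)
      {
              struct mm_struct *mm = current->mm;
              struct kioctx *ctx, *ret = NULL;
              struct hlist_node *n;

              rcu_read_lock();
              hlist_for_each_entry_rcu(ctx, n, &mm->ioctx_list, list) {
                      if (ctx->user_id == ctx_id && !ctx->dead) {
                              get_ioctx(ctx);         /* take a ref before leaving RCU */
                              ret = ctx;
                              break;
                      }
              }
              rcu_read_unlock();

              return ret;
      }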
    • bio: add support for inlining a number of bio_vecs inside the bio · 392ddc32
      Authored by Jens Axboe
      When we go and allocate a bio for IO, we actually do two allocations.
      One for the bio itself, and one for the bi_io_vec that holds the
      actual pages we are interested in.
      
      This feature inlines a definable number of io vecs inside the bio
      itself, so we eliminate the bio_vec array allocation for IOs up
      to a certain size. It defaults to 4 vecs, which typically covers 16k
      of IO.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      392ddc32
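      Conceptually (a sketch; the constant name and allocation details are
      assumptions), the bio gains a small trailing array that covers the common case:

      #define BIO_INLINE_VECS 4       /* default; roughly 16k of IO with 4k pages */

      struct bio {
              /* ... existing fields ... */

              /*
               * Must stay at the end: the bio slab is sized so that this
               * array is allocated together with the bio itself.
               */
              struct bio_vec          bi_inline_vecs[0];
      };

      /* Inside bio_alloc_bioset() (sketch): use the inline vecs when they fit. */
      if (nr_iovecs <= BIO_INLINE_VECS) {
              bvl = bio->bi_inline_vecs;                      /* no second allocation */
              nr_iovecs = BIO_INLINE_VECS;
      } else {
              bvl = bvec_alloc_bs(gfp_mask, nr_iovecs, &idx, bs);     /* separate bvec slab */
      }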
    • bio: allow individual slabs in the bio_set · bb799ca0
      Authored by Jens Axboe
      Instead of having a global bio slab cache, add a reference to one
      in each bio_set that is created. This allows for personalized slabs
      in each bio_set, so that they can have bios of different sizes.
      
      This means we can personalize the bios we return. File systems may
      want to embed the bio inside another structure, to avoid allocating
      more items (and stuffing them in ->bi_private) after they get a bio.
      Or we may want to embed a number of bio_vecs directly at the end
      of a bio, to avoid doing two allocations to return a bio. This is now
      possible.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      bb799ca0
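      A sketch of the shape this takes, assuming a front_pad argument to
      bioset_create() so callers can reserve space in front of each bio; the names
      follow the description but may not match fs/bio.c exactly:

      struct bio_set {
              struct kmem_cache       *bio_slab;      /* per-set slab, not a global one */
              unsigned int            front_pad;      /* bytes reserved in front of each bio */
              /* ... bvec pools etc. ... */
      };

      /*
       * Sketch: each bio_set gets (or shares) a slab sized for its own bios.
       * bio_find_or_create_slab() is assumed to take the extra bytes needed
       * on top of struct bio.
       */
      struct bio_set *bioset_create(unsigned int pool_size, unsigned int front_pad)
      {
              struct bio_set *bs = kzalloc(sizeof(*bs), GFP_KERNEL);

              if (!bs)
                      return NULL;

              bs->front_pad = front_pad;
              bs->bio_slab = bio_find_or_create_slab(front_pad);
              if (!bs->bio_slab) {
                      kfree(bs);
                      return NULL;
              }

              /* ... set up pool_size mempool entries backed by bs->bio_slab ... */
              return bs;
      }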
    • bio: move the slab pointer inside the bio_set · 1b434498
      Authored by Jens Axboe
      In preparation for adding differently sized bios.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1b434498
    • bio: only mempool back the largest bio_vec slab cache · 7ff9345f
      Authored by Jens Axboe
      We only very rarely need the mempool backing, so it makes sense to
      get rid of all but one of the mempools in a bio_set. Keep the
      mempool for the largest bio_vec count so we can always honor the largest
      allocation, and "upgrade" callers whose smaller allocation fails.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      7ff9345f
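      A sketch of the "upgrade" path: try the exact-fit bio_vec slab first and fall
      back to the single mempool-backed largest slab only on failure. The bvec_index()
      helper is an assumed stand-in for the real slab-size selection:

      /* Sketch only: BIOVEC_MAX_IDX is the largest (mempool-backed) bvec slab. */
      struct bio_vec *bvec_alloc_bs(gfp_t gfp_mask, int nr, unsigned long *idx,
                                    struct bio_set *bs)
      {
              struct bio_vec *bvl;

              *idx = bvec_index(nr);  /* assumed helper: smallest slab holding nr vecs */

              if (*idx == BIOVEC_MAX_IDX) {
      fallback:
                      /* Largest size: always backed by a mempool, so it can't fail hard. */
                      bvl = mempool_alloc(bs->bvec_pool, gfp_mask);
              } else {
                      struct biovec_slab *bvs = bvec_slabs + *idx;

                      bvl = kmem_cache_alloc(bvs->slab, gfp_mask);
                      if (unlikely(!bvl)) {
                              /* "Upgrade" the failed caller to the mempool-backed slab. */
                              *idx = BIOVEC_MAX_IDX;
                              goto fallback;
                      }
              }

              return bvl;
      }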
    • block: simplify empty barrier implementation · 58eea927
      Authored by Tejun Heo
      Empty barriers required special handling in __elv_next_request() to
      complete them without letting the low-level driver see them.
      
      With the previous changes, the barrier code is now flexible enough to skip
      the BAR step using the same barrier sequence selection mechanism.  Drop
      the special handling and mask off q->ordered in start_ordered().

      Remove the blk_empty_barrier() test, which now has no users.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      58eea927
    • block: make barrier completion more robust · 8f11b3e9
      Authored by Tejun Heo
      Barrier completion made the following assumptions.

      * start_ordered() could not finish the whole sequence by itself.  If all
        actions are to be skipped, q->ordseq is set correctly, but the actual
        completion is never triggered, thus hanging the barrier request.

      * Drain completion in elv_completed_request() assumed that there is
        always at least one request in the queue when the drain completes.
      
      Both assumptions hold today, but they need to be removed to improve
      the empty barrier implementation.  This patch makes the following
      changes.
      
      * Make start_ordered() use blk_ordered_complete_seq() to mark skipped
        steps complete and notify __elv_next_request() that it should fetch
        the next request if the whole barrier has completed inside
        start_ordered().
      
      * Make the drain completion path in elv_completed_request() check whether
        the queue is empty.  An empty queue also indicates drain completion.

      * While at it, convert the 0/1 return of blk_do_ordered() to false/true.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      8f11b3e9
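      For the drain-completion change, a sketch of the extra check in
      elv_completed_request(); the exact condition and surrounding code are
      assumptions based on the description above:

      /* Sketch: after a request completes while the barrier sequence is draining. */
      if (blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN &&
          list_empty(&q->queue_head))
              /* An empty queue now also counts as drain completion. */
              blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0);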