1. 22 Sep 2011, 1 commit
• XZ: Fix incorrect XZ_BUF_ERROR · 9c1f8594
Lasse Collin authored
xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the
following were true:
      
       - The caller knows how many bytes of output to expect and only provides
         that much output space.
      
       - When the last output bytes are decoded, the caller-provided input
         buffer ends right before the LZMA2 end of payload marker.  So LZMA2
         won't provide more output anymore, but it won't know it yet and thus
         won't return XZ_STREAM_END yet.
      
       - A BCJ filter is in use and it hasn't left any unfiltered bytes in the
         temp buffer.  This can happen with any BCJ filter, but in practice
         it's more likely with filters other than the x86 BCJ.
      
      This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408> where
      Squashfs thinks that a valid file system is corrupt.
      
      This also fixes a similar bug in single-call mode where the uncompressed
      size of a block using BCJ + LZMA2 was 0 bytes and caller provided no
      output space.  Many empty .xz files don't contain any blocks and thus
      don't trigger this bug.
      
This also tweaks a closely related detail: xz_dec_bcj_run() could call
xz_dec_lzma2_run() to decode into the temp buffer even when that was
known to be useless.  This was harmless, although it wasted a minuscule
number of CPU cycles.  (A hedged sketch of the multi-call pattern that
could hit the bug follows this entry.)
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Cc: stable <stable@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c1f8594
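
A minimal sketch of the multi-call usage pattern this fix concerns,
per the include/linux/xz.h API; the helper name, dictionary size, and
error mapping are illustrative, not from the patch:

  #include <linux/xz.h>
  #include <linux/errno.h>

  /* Hypothetical helper: decode a stream whose uncompressed size is known. */
  static int decode_exact(const u8 *in, size_t in_size, u8 *out, size_t out_size)
  {
          struct xz_dec *s = xz_dec_init(XZ_PREALLOC, 1 << 20);   /* multi-call mode */
          struct xz_buf b = {
                  .in = in,   .in_pos = 0,  .in_size = in_size,
                  .out = out, .out_pos = 0, .out_size = out_size, /* exactly-sized */
          };
          enum xz_ret ret;

          if (!s)
                  return -ENOMEM;
          /*
           * If the input arrived in chunks, with one chunk ending right
           * before the LZMA2 end-of-payload marker, the pre-fix decoder
           * could return XZ_BUF_ERROR here despite the stream being valid.
           */
          ret = xz_dec_run(s, &b);
          xz_dec_end(s);
          return ret == XZ_STREAM_END ? 0 : -EINVAL;
  }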
2. 14 Sep 2011, 1 commit
3. 31 Aug 2011, 1 commit
• bitops: Move find_next_bit.o from lib-y to obj-y · bd823821
Geert Uytterhoeven authored
      If there are no builtin users of find_next_bit_le() and
      find_next_zero_bit_le(), these functions are not present in the kernel
      image, causing m68k allmodconfig to fail with:
      
        ERROR: "find_next_zero_bit_le" [fs/ufs/ufs.ko] undefined!
        ERROR: "find_next_bit_le" [fs/udf/udf.ko] undefined!
        ...
      
      This started to happen after commit 171d809d ("m68k: merge mmu and
      non-mmu bitops.h"), as m68k had its own inline versions before.
      
Commit 63e424c8 ("arch: remove CONFIG_GENERIC_FIND_{NEXT_BIT,
      BIT_LE, LAST_BIT}") added find_last_bit.o to obj-y (so it's always
      included), but find_next_bit.o to lib-y (so it gets removed by the
      linker if there are no builtin users).
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bd823821
4. 07 Aug 2011, 2 commits
• crypto: Move md5_transform to lib/md5.c · bc0b96b5
David S. Miller authored
      We are going to use this for TCP/IP sequence number and fragment ID
      generation.
Signed-off-by: David S. Miller <davem@davemloft.net>
      bc0b96b5
• lib/sha1: use the git implementation of SHA-1 · 1eb19a12
Mandeep Singh Baines authored
      For ChromiumOS, we use SHA-1 to verify the integrity of the root
      filesystem.  The speed of the kernel sha-1 implementation has a major
      impact on our boot performance.
      
To improve boot performance, we investigated using the heavily optimized
sha-1 implementation used in git.  With the git sha-1 implementation, we
see an 11.7% improvement in boot time.
      
      10 reboots, remove slowest/fastest.
      
      Before:
      
        Mean: 6.58 seconds Stdev: 0.14
      
      After (with git sha-1, this patch):
      
        Mean: 5.89 seconds Stdev: 0.07
      
The other cool thing about the git SHA-1 implementation is that it only
needs 64 bytes of stack for the workspace, while the original kernel
implementation needed 320 bytes (see the sketch after this entry).
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
      Cc: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
      Cc: Nicolas Pitre <nico@cam.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: linux-crypto@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1eb19a12
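
For reference, a hedged sketch of hashing a single 64-byte block with
the kernel's sha_transform(), showing the 16-word (64-byte) workspace
this patch permits; prototypes and constants are those of
<linux/cryptohash.h> in this era:

  #include <linux/cryptohash.h>
  #include <linux/string.h>

  static void sha1_one_block(const char data[64], __u32 digest[SHA_DIGEST_WORDS])
  {
          __u32 ws[SHA_WORKSPACE_WORDS];   /* now 16 words (64 bytes), was 80 (320 bytes) */

          sha_init(digest);                /* load the standard SHA-1 initial values */
          sha_transform(digest, data, ws); /* consume one 64-byte block */
          memset(ws, 0, sizeof(ws));       /* don't leave the message schedule on the stack */
  }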
5. 04 Aug 2011, 4 commits
• tmpfs radix_tree: locate_item to speed up swapoff · e504f3fd
Hugh Dickins authored
      We have already acknowledged that swapoff of a tmpfs file is slower than
      it was before conversion to the generic radix_tree: a little slower
      there will be acceptable, if the hotter paths are faster.
      
      But it was a shock to find swapoff of a 500MB file 20 times slower on my
      laptop, taking 10 minutes; and at that rate it significantly slows down
      my testing.
      
      Now, most of that turned out to be overhead from PROVE_LOCKING and
      PROVE_RCU: without those it was only 4 times slower than before; and
      more realistic tests on other machines don't fare as badly.
      
      I've tried a number of things to improve it, including tagging the swap
      entries, then doing lookup by tag: I'd expected that to halve the time,
      but in practice it's erratic, and often counter-productive.
      
      The only change I've so far found to make a consistent improvement, is
      to short-circuit the way we go back and forth, gang lookup packing
      entries into the array supplied, then shmem scanning that array for the
      target entry.  Scanning in place doubles the speed, so it's now only
      twice as slow as before (or three times slower when the PROVEs are on).
      
So, add radix_tree_locate_item() as an expedient, once-off,
single-caller hack to do the lookup directly in place (a sketch of its
use follows this entry).  #ifdef it on CONFIG_SHMEM and CONFIG_SWAP, as
much to document its limited applicability as to save space in other
configurations.  And, sadly, #include sched.h for cond_resched().
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e504f3fd
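
A hedged sketch of the new helper's shape and its single intended
caller on the shmem swapoff path; the wrapper name is illustrative:

  #include <linux/radix-tree.h>

  /*
   * Added by this patch: walk the tree in place and return the index
   * at which item is stored, or -1 if it is not present.
   */
  unsigned long radix_tree_locate_item(struct radix_tree_root *root, void *item);

  /* Hypothetical wrapper: one in-place walk instead of gang lookup
   * plus rescanning the returned array. */
  static long find_swap_index(struct radix_tree_root *tree, void *radswap)
  {
          return (long)radix_tree_locate_item(tree, radswap);
  }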
• radix_tree: exceptional entries and indices · 6328650b
Hugh Dickins authored
      A patchset to extend tmpfs to MAX_LFS_FILESIZE by abandoning its
      peculiar swap vector, instead keeping a file's swap entries in the same
      radix tree as its struct page pointers: thus saving memory, and
      simplifying its code and locking.
      
      This patch:
      
      The radix_tree is used by several subsystems for different purposes.  A
      major use is to store the struct page pointers of a file's pagecache for
      memory management.  But what if mm wanted to store something other than
      page pointers there too?
      
      The low bit of a radix_tree entry is already used to denote an indirect
      pointer, for internal use, and the unlikely radix_tree_deref_retry()
      case.
      
      Define the next bit as denoting an exceptional entry, and supply inline
      functions radix_tree_exception() to return non-0 in either unlikely
      case, and radix_tree_exceptional_entry() to return non-0 in the second
      case.
      
      If a subsystem already uses radix_tree with that bit set, no problem: it
      does not affect internal workings at all, but is defined for the
      convenience of those storing well-aligned pointers in the radix_tree.
      
      The radix_tree_gang_lookups have an implicit assumption that the caller
      can deduce the offset of each entry returned e.g.  by the page->index of
      a struct page.  But that may not be feasible for some kinds of item to
      be stored there.
      
radix_tree_gang_lookup_slot() now allows for an optional indices
argument, an output array in which to return those offsets.  The same
could be added to the other radix_tree_gang_lookups, but for now keep
it to the only one for which we need it.  (A sketch of the new
predicates follows this entry.)
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6328650b
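
A sketch of the two new predicates, modeled on this patch's additions
to include/linux/radix-tree.h (the real code also wraps the first test
in unlikely(), omitted here):

  #define RADIX_TREE_INDIRECT_PTR      1  /* low bit: internal indirect pointer */
  #define RADIX_TREE_EXCEPTIONAL_ENTRY 2  /* next bit: caller's exceptional entry */

  /* non-0 in either unlikely case: indirect pointer or exceptional entry */
  static inline int radix_tree_exception(void *arg)
  {
          return (unsigned long)arg &
                  (RADIX_TREE_INDIRECT_PTR | RADIX_TREE_EXCEPTIONAL_ENTRY);
  }

  /* non-0 only in the second case: an exceptional entry */
  static inline int radix_tree_exceptional_entry(void *arg)
  {
          return (unsigned long)arg & RADIX_TREE_EXCEPTIONAL_ENTRY;
  }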
• ida: simplified functions for id allocation · 88eca020
Rusty Russell authored
The current hyper-optimized functions are overkill if you simply want
to allocate an id for a device.  Create versions which use an internal
lock (see the usage sketch after this entry).
      
      In followup patches, numerous drivers are converted to use this
      interface.
      
      Thanks to Tejun for feedback.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      88eca020
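
A hedged usage sketch of the simplified interface; the "widget" device
class and the id range are illustrative:

  #include <linux/idr.h>
  #include <linux/gfp.h>

  static DEFINE_IDA(widget_ida);          /* hypothetical device class */

  static int widget_get_id(void)
  {
          /* lowest free id in [0, 256); locking is handled internally */
          return ida_simple_get(&widget_ida, 0, 256, GFP_KERNEL);
  }

  static void widget_put_id(int id)
  {
          ida_simple_remove(&widget_ida, id);
  }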
• fault-injection: add ability to export fault_attr in arbitrary directory · dd48c085
Akinobu Mita authored
init_fault_attr_dentries() is used to export a fault_attr via debugfs,
but it can only export it in the debugfs root directory.

Per Forlin is working on mmc_fail_request, which adds support for
injecting data errors after a completed host transfer in the MMC
subsystem.

The fault_attr for mmc_fail_request should be defined per mmc host and
exported in a per-host debugfs directory, e.g.
/sys/kernel/debug/mmc0/mmc_fail_request.

init_fault_attr_dentries() doesn't help with that.  So this introduces
fault_create_debugfs_attr(), which is able to create the directory in
an arbitrary parent directory and replaces init_fault_attr_dentries()
(see the sketch after this entry).
      
      [akpm@linux-foundation.org: extraneous semicolon, per Randy]
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Tested-by: Per Forlin <per.forlin@linaro.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Randy Dunlap <rdunlap@xenotime.net>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd48c085
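
A hedged sketch of the new API as described above; the attribute name
and per-host directory are illustrative:

  #include <linux/fault-inject.h>
  #include <linux/debugfs.h>
  #include <linux/err.h>

  static struct fault_attr fail_request = FAULT_ATTR_INITIALIZER;

  static int mmc_add_fail_request(struct dentry *host_dir)
  {
          /* creates e.g. /sys/kernel/debug/mmc0/mmc_fail_request */
          struct dentry *d = fault_create_debugfs_attr("mmc_fail_request",
                                                       host_dir, &fail_request);

          return IS_ERR(d) ? PTR_ERR(d) : 0;
  }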
6. 03 Aug 2011, 2 commits
• lib, Make gen_pool memory allocator lockless · 7f184275
Huang Ying authored
      This version of the gen_pool memory allocator supports lockless
      operation.
      
      This makes it safe to use in NMI handlers and other special
      unblockable contexts that could otherwise deadlock on locks.  This is
      implemented by using atomic operations and retries on any conflicts.
      The disadvantage is that there may be livelocks in extreme cases.  For
      better scalability, one gen_pool allocator can be used for each CPU.
      
The lockless operation only works if there is enough memory available.
If new memory is added to the pool, a lock still has to be taken, so
any user relying on locklessness has to ensure that sufficient memory
is preallocated (see the usage sketch after this entry).

The basic atomic operation of this allocator is cmpxchg on long.  On
architectures that don't have an NMI-safe cmpxchg implementation, the
allocator can NOT be used in NMI handlers, so code that uses the
allocator in an NMI handler should depend on
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
      7f184275
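
A hedged usage sketch: populate the pool up front (which still takes a
lock) so that later allocations can run in atomic or NMI context; the
chunk size and granule order are illustrative:

  #include <linux/genalloc.h>
  #include <linux/slab.h>

  static struct gen_pool *nmi_pool;

  static int nmi_pool_init(void)
  {
          void *chunk = kmalloc(4096, GFP_KERNEL);

          if (!chunk)
                  return -ENOMEM;
          nmi_pool = gen_pool_create(5, -1);      /* 32-byte granules, any NUMA node */
          if (!nmi_pool)
                  return -ENOMEM;
          /* adding memory still takes a lock, so do it at init time */
          return gen_pool_add(nmi_pool, (unsigned long)chunk, 4096, -1);
  }

  /* lock-free once the pool is populated (requires NMI-safe cmpxchg) */
  static void *nmi_alloc(size_t size)
  {
          return (void *)gen_pool_alloc(nmi_pool, size);
  }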
• lib, Add lock-less NULL terminated single list · f49f23ab
Huang Ying authored
Cmpxchg is used to implement adding a new entry to the list, deleting
all entries from the list, deleting the first entry of the list, and
some other operations.

Because this is a singly linked list, the tail can not be accessed in
O(1) time.
      
If there are multiple producers and multiple consumers, llist_add can
be used in the producers and llist_del_all can be used in the
consumers.  They can work simultaneously without a lock.  But
llist_del_first can not be used here, because llist_del_first depends
on list->first->next not changing if list->first is unchanged during
its operation, and an llist_del_first, llist_add, llist_add (or
llist_del_all, llist_add, llist_add) sequence in another consumer may
violate that.
      
      If there are multiple producers and one consumer, llist_add can be
      used in producers and llist_del_all or llist_del_first can be used in
      the consumer.
      
This can be summarized as follows:

           |   add    | del_first |  del_all
 add       |    -     |     -     |     -
 del_first |          |     L     |     L
 del_all   |          |           |     -

where "-" means no lock is needed, and "L" means a lock is needed.
      
The list entries deleted via llist_del_all can be traversed with
traversal functions such as llist_for_each() (see the usage sketch
after this entry).  But the list entries can not be traversed safely
before being deleted from the list.  The order of deleted entries is
from the newest to the oldest added one.  If you want to traverse from
the oldest to the newest, you must reverse the order yourself before
traversing.
      
The basic atomic operation of this list is cmpxchg on long.  On
architectures that don't have an NMI-safe cmpxchg implementation, the
list can NOT be used in NMI handlers, so code that uses the list in an
NMI handler should depend on CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
      f49f23ab
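
A hedged usage sketch for the multiple-producer, single-consumer case;
struct item and the handler are illustrative:

  #include <linux/llist.h>

  struct item {
          int payload;
          struct llist_node llnode;
  };

  static LLIST_HEAD(pending);

  static void handle_item(struct item *it);       /* hypothetical handler */

  /* producer: safe on many CPUs concurrently, no lock taken */
  static void produce(struct item *it)
  {
          llist_add(&it->llnode, &pending);
  }

  /* single consumer: detach everything at once; entries come newest-first */
  static void consume(void)
  {
          struct llist_node *first = llist_del_all(&pending);
          struct item *it;

          llist_for_each_entry(it, first, llnode)
                  handle_item(it);
  }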
7. 27 Jul 2011, 7 commits
8. 26 Jul 2011, 3 commits
9. 25 Jul 2011, 1 commit
10. 23 Jul 2011, 1 commit
11. 15 Jul 2011, 1 commit
12. 08 Jul 2011, 1 commit
13. 07 Jul 2011, 1 commit
14. 05 Jul 2011, 1 commit
15. 23 Jun 2011, 1 commit
16. 20 Jun 2011, 1 commit
• debugobjects: Fix boot crash when kmemleak and debugobjects enabled · 161b6ae0
Marcin Slusarz authored
The order of initialization looks like this:
...
debugobjects
kmemleak
...(lots of other subsystems)...
workqueues (through early initcall)
...

debugobjects uses schedule_work() for batch freeing of its data, and
kmemleak heavily uses debugobjects, so when freeing happens while
workqueues are not yet initialized, the kernel crashes:
      
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffff810854d1>] __queue_work+0x29/0x41a
       [<ffffffff81085910>] queue_work_on+0x16/0x1d
       [<ffffffff81085abc>] queue_work+0x29/0x55
       [<ffffffff81085afb>] schedule_work+0x13/0x15
       [<ffffffff81242de1>] free_object+0x90/0x95
       [<ffffffff81242f6d>] debug_check_no_obj_freed+0x187/0x1d3
       [<ffffffff814b6504>] ? _raw_spin_unlock_irqrestore+0x30/0x4d
       [<ffffffff8110bd14>] ? free_object_rcu+0x68/0x6d
       [<ffffffff8110890c>] kmem_cache_free+0x64/0x12c
       [<ffffffff8110bd14>] free_object_rcu+0x68/0x6d
       [<ffffffff810b58bc>] __rcu_process_callbacks+0x1b6/0x2d9
      ...
      
      because system_wq is NULL.
      
Fix it by checking whether the workqueue subsystem has been initialized
before using it (a sketch of the guard pattern follows this entry).
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@kernel.org
Link: http://lkml.kernel.org/r/20110528112342.GA3068@joi.lan
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      161b6ae0
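
A sketch of the guard pattern the fix relies on, assuming the
keventd_up() helper of this era (it reports whether system_wq has been
set up); the names here are illustrative:

  #include <linux/workqueue.h>

  static void batch_free(struct work_struct *work)
  {
          /* ... free pooled debug objects ... */
  }
  static DECLARE_WORK(free_work, batch_free);

  static void maybe_schedule_free(void)
  {
          /* before the workqueue subsystem is up, keep objects pooled */
          if (keventd_up() && !work_pending(&free_work))
                  schedule_work(&free_work);
  }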
17. 16 Jun 2011, 1 commit
18. 13 Jun 2011, 1 commit
• Delay struct net freeing while there's a sysfs instance referring to it · a685e089
Al Viro authored
	* new refcount in struct net, controlling actual freeing of the memory
	* new method in kobj_ns_type_operations (->drop_ns())
	* ->current_ns() semantics change - it's supposed to be followed by
a corresponding ->drop_ns().  For struct net in the CONFIG_NET_NS case it
bumps the new refcount; net_drop_ns() decrements it and calls net_free()
if the last reference has been dropped.  Method renamed to
->grab_current_ns().
	* old net_free() callers call net_drop_ns() instead.
	* sysfs_exit_ns() is gone, along with a large part of the callchain
leading to it; now that the references stored in ->ns[...] stay valid we
do not need to hunt them down and replace them with NULL.  That fixes
problems in sysfs_lookup() and sysfs_readdir(), along with getting rid
of the sb->s_instances abuse.
      
	Note that struct net *shutdown* logic has not changed - net_cleanup()
is called exactly when it used to be called.  The only thing postponed by
having a sysfs instance referring to that struct net is the actual freeing
of the memory occupied by struct net.  (A sketch of the reshaped
kobj_ns_type_operations follows this entry.)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a685e089
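
A hedged sketch of the reshaped method table, modeled on
include/linux/kobject_ns.h after this change; the enum is abridged and
the field order is an assumption:

  struct sock;
  enum kobj_ns_type { KOBJ_NS_TYPE_NONE, KOBJ_NS_TYPE_NET };  /* abridged */

  struct kobj_ns_type_operations {
          enum kobj_ns_type type;
          /* was ->current_ns(); now also takes a reference on the ns */
          void *(*grab_current_ns)(void);
          const void *(*netlink_ns)(struct sock *sk);
          const void *(*initial_ns)(void);
          /* new: drops the reference taken by ->grab_current_ns() */
          void (*drop_ns)(void *);
  };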
19. 10 Jun 2011, 2 commits
20. 07 Jun 2011, 1 commit
• swiotlb: Export swioltb_nr_tbl and utilize it as appropiate. · 5f98ecdb
FUJITA Tomonori authored
By default io_tlb_nslabs is set to zero, and gets set to whatever
value is passed in via the swiotlb_init_with_tbl function.  The
default value passed in is 64MB.  However, if the user provides
'swiotlb=<nslabs>', that default is ignored and the user's value is
used... except when the SWIOTLB is used under Xen - there the default
value of 64MB is used and the Xen-SWIOTLB has no mechanism to get the
'io_tlb_nslabs' filled out by the setup_io_tlb_npages function.  This
patch provides a function for the Xen-SWIOTLB to call to see if
io_tlb_nslabs is set and, if so, use that value (see the sketch after
this entry).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      5f98ecdb
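
A hedged sketch of how Xen-SWIOTLB might consume the new helper (named
swioltb_nr_tbl in this commit); the fallback mirrors the 64MB default
described above:

  #include <linux/swiotlb.h>

  static unsigned long xen_nslabs(void)
  {
          /* non-zero only if the user passed swiotlb=<nslabs> */
          unsigned long nslabs = swioltb_nr_tbl();

          if (!nslabs)
                  nslabs = (64 << 20) >> IO_TLB_SHIFT;    /* the 64MB default */
          return nslabs;
  }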
21. 04 Jun 2011, 2 commits
22. 02 Jun 2011, 1 commit
23. 27 May 2011, 3 commits