1. 30 Aug 2018, 1 commit
  2. 02 Nov 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Committed by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier should be applied
      to a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file-by-file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      should be applied to each file.  She confirmed any determination that
      was not immediately clear with lawyers working with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source.
       - Files that already had some variant of a license header in them were
         included (even if <5 lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when neither scanner could find any license traces, the file was
         considered to have no license information in it, and the top-level
         COPYING file license applied.
      
         For non-*/uapi/* files, that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If such a file was under a */uapi/* path, it was tagged "GPL-2.0
         WITH Linux-syscall-note"; otherwise it was tagged "GPL-2.0".
         Results of that were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was annotated with the Linux-syscall-note if
         any GPL-family license was found in the file or if it had no
         licensing in it (per the prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other did not, or they each detected different
         licenses), a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         showed which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
      identifiers to apply to the source files, with confirmation in some
      cases by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.
      The Windriver scanner is based in part on an older version of
      FOSSology, so the two are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifiers in the
      files he inspected.  For the non-uapi files, Thomas did random spot
      checks in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to
      have copy/paste license identifier errors, and they have been fixed to
      reflect the correct identifier.
      
      Additionally, Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial
      patch version, with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores;
       - a review of everything where a license was detected (about 500+
         files) to ensure that the applied SPDX license was correct;
       - a review of everything where there was no detection but the patch
         license was not GPL-2.0 WITH Linux-syscall-note, to ensure that the
         applied SPDX license was correct.
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the .csv files and add the proper SPDX tag to each file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output, to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally, Greg ran the script using the .csv files to
      generate the patches.
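
      For illustration (this follows the kernel's documented SPDX
      conventions, and is not text from this patch): the tag goes on the
      first line of the file, in the comment style each file type expects.

         // SPDX-License-Identifier: GPL-2.0
         /* ^ first line of a C source (.c) file: C++-style comment */

         /* SPDX-License-Identifier: GPL-2.0 */
         /* ^ first line of a C header (.h) file: C-style comment */

         /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
         /* ^ first line of an exported uapi header */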
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 13 Jul 2017, 1 commit
  4. 21 Jun 2017, 1 commit
    • percpu_counter: Rename __percpu_counter_add to percpu_counter_add_batch · 104b4e51
      Committed by Nikolay Borisov
      Currently, percpu_counter_add is a wrapper around __percpu_counter_add,
      which is preempt-safe due to explicit calls to preempt_disable.  Given
      how the __ prefix is used in percpu-related interfaces, the naming
      unfortunately creates the false sense that __percpu_counter_add is
      less safe than percpu_counter_add.  In terms of context-safety,
      they're equivalent.  The only difference is that the __ version takes
      a batch parameter.
      
      Make this a bit more explicit by just renaming __percpu_counter_add to
      percpu_counter_add_batch.
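
      A minimal sketch of the result, following the kernel's percpu_counter
      API (percpu_counter_batch is the exported default batch size):

         /* Renamed: the suffix names the one real difference, the batch. */
         void percpu_counter_add_batch(struct percpu_counter *fbc,
                                       s64 amount, s32 batch);

         /* percpu_counter_add() stays a thin wrapper supplying the
          * default batch size. */
         static inline void percpu_counter_add(struct percpu_counter *fbc,
                                               s64 amount)
         {
                 percpu_counter_add_batch(fbc, amount, percpu_counter_batch);
         }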
      
      This patch doesn't cause any functional changes.
      
      tj: Minor updates to patch description for clarity.  Cosmetic
          indentation updates.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Jan Kara <jack@suse.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: linux-mm@kvack.org
      Cc: "David S. Miller" <davem@davemloft.net>
  5. 20 Jan 2017, 1 commit
  6. 10 Nov 2016, 1 commit
  7. 20 May 2016, 1 commit
  8. 29 May 2015, 1 commit
    • percpu_counter: batch size aware __percpu_counter_compare() · 80188b0d
      Committed by Dave Chinner
      XFS uses non-standard batch sizes to avoid frequent global
      counter updates on its allocated inode counters, as they increment
      or decrement in batches of 64 inodes.  Hence the standard percpu
      counter batch of 32 means that the counter is effectively a global
      counter.  Currently XFS uses a batch size of 128 so that it doesn't
      take the global lock on every single modification.
      
      However, XFS also needs to compare accurately against zero, which
      means it needs to use percpu_counter_compare().  That function has a
      hard-coded batch size of 32, and hence will spuriously fail to
      detect when it is supposed to use precise comparisons, and the
      accounting goes wrong.
      
      Add __percpu_counter_compare() to take a custom batch size so we can
      use it sanely in XFS, and refactor percpu_counter_compare() to use it.
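
      A sketch of the resulting interface:

         /* Precise comparison that honours a caller-supplied batch size. */
         int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs,
                                      s32 batch);

         /* The existing helper becomes a wrapper using the default batch. */
         static inline int percpu_counter_compare(struct percpu_counter *fbc,
                                                  s64 rhs)
         {
                 return __percpu_counter_compare(fbc, rhs,
                                                 percpu_counter_batch);
         }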
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  9. 08 Sep 2014, 2 commits
    • percpu_counter: add @gfp to percpu_counter_init() · 908c7f19
      Committed by Tejun Heo
      The percpu allocator now supports an allocation mask.  Add @gfp to
      percpu_counter_init() so that !GFP_KERNEL allocation masks can be used
      with percpu_counters too.
      
      We could have left percpu_counter_init() alone and added
      percpu_counter_init_gfp(); however, the number of users isn't that
      high and introducing _gfp variants to all percpu data structures would
      be quite ugly, so let's just do the conversion.  This is the one with
      the most users.  Other percpu data structures are a lot easier to
      convert.
      
      This patch doesn't make any functional difference.
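
      The conversion at call sites, sketched (error handling elided; the
      counter name is for illustration):

         /* Before: the allocation mask was implicitly GFP_KERNEL. */
         err = percpu_counter_init(&nr_files, 0);

         /* After: the caller passes the mask explicitly, so atomic
          * callers can use e.g. GFP_NOWAIT instead of GFP_KERNEL. */
         err = percpu_counter_init(&nr_files, 0, GFP_KERNEL);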
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Jan Kara <jack@suse.cz>
      Acked-by: "David S. Miller" <davem@davemloft.net>
      Cc: x86@kernel.org
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
    • percpu_counter: make percpu_counters_lock irq-safe · ebd8fef3
      Committed by Tejun Heo
      percpu_counter is scheduled to grow @gfp support to allow atomic
      initialization.  This patch makes percpu_counters_lock irq-safe so
      that it can be safely used from atomic contexts.
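
      The usual shape of such a conversion, sketched:

         /* Before: unsafe if the lock can be taken from atomic context. */
         spin_lock(&percpu_counters_lock);
         list_add(&fbc->list, &percpu_counters);
         spin_unlock(&percpu_counters_lock);

         /* After: local interrupts are disabled while the lock is held,
          * so atomic contexts may take it as well. */
         spin_lock_irqsave(&percpu_counters_lock, flags);
         list_add(&fbc->list, &percpu_counters);
         spin_unlock_irqrestore(&percpu_counters_lock, flags);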
      Signed-off-by: Tejun Heo <tj@kernel.org>
  10. 09 Apr 2014, 1 commit
    • lib/percpu_counter.c: fix bad percpu counter state during suspend · e39435ce
      Committed by Jens Axboe
      I got a bug report yesterday from Laszlo Ersek in which he states that
      his kvm instance fails to suspend.  Laszlo bisected it down to this
      commit 1cf7e9c6 ("virtio_blk: blk-mq support") where virtio-blk is
      converted to use the blk-mq infrastructure.
      
      After digging a bit, it became clear that the issue was with the queue
      drain.  blk-mq tracks queue usage in a percpu counter, which is
      incremented on request alloc and decremented when the request is freed.
      The initial hunt was for an inconsistency in blk-mq, but everything
      seemed fine.  In fact, the counter only returned crazy values when
      suspend was in progress.
      
      When a CPU is unplugged, the percpu counter code merges that CPU's
      state with the global state.  blk-mq takes care to register a hotcpu
      notifier with the appropriate priority, so we know it runs after the
      percpu counter notifier.  However, the percpu counter notifier only
      merges the state when the CPU is fully gone.  This leaves a state
      transition where the CPU going away is no longer in the online mask,
      yet it still holds private values.  This means that in this state,
      percpu_counter_sum() returns invalid results, and the suspend then
      hangs waiting for abs(dead-cpu-value) requests to complete, which of
      course will never happen.
      
      Fix this by clearing the state earlier, so we never have a case where
      the CPU isn't in the online mask but still holds private state.  This
      bug has been there since forever; I guess we don't have a lot of users
      where percpu counters need to be reliable during the suspend cycle.
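
      What the merge step does, sketched (not the exact patch; the fix is
      about *when* this runs - early enough that a CPU missing from the
      online mask never still holds a private delta):

         static void percpu_counter_merge_cpu(struct percpu_counter *fbc,
                                              int cpu)
         {
                 unsigned long flags;
                 s32 *pcount;

                 raw_spin_lock_irqsave(&fbc->lock, flags);
                 pcount = per_cpu_ptr(fbc->counters, cpu);
                 fbc->count += *pcount; /* fold the CPU's delta in... */
                 *pcount = 0;           /* ...so it holds no private state */
                 raw_spin_unlock_irqrestore(&fbc->lock, flags);
         }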
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Tested-by: Laszlo Ersek <lersek@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 07 Apr 2014, 1 commit
    • percpu_counter: fix bad counter state during suspend · 60b0ea12
      Committed by Jens Axboe
      I got a bug report yesterday from Laszlo Ersek <lersek@xxxxxxxxxx>, in
      which he states that his kvm instance fails to suspend.  Laszlo
      bisected it down to this commit:
      
      commit 1cf7e9c6
      Author: Jens Axboe <axboe@xxxxxxxxx>
      Date: Fri Nov 1 10:52:52 2013 -0600
      
      virtio_blk: blk-mq support
      
      where virtio-blk is converted to use the blk-mq infrastructure.  After
      digging a bit, it became clear that the issue was with the queue drain.
      blk-mq tracks queue usage in a percpu counter, which is incremented on
      request alloc and decremented when the request is freed.  The initial
      hunt was for an inconsistency in blk-mq, but everything seemed fine.
      In fact, the counter only returned crazy values when suspend was in
      progress.  When a CPU is unplugged, the percpu counter code merges
      that CPU's state with the global state.  blk-mq takes care to register
      a hotcpu notifier with the appropriate priority, so we know it runs
      after the percpu counter notifier.  However, the percpu counter
      notifier only merges the state when the CPU is fully gone.  This
      leaves a state transition where the CPU going away is no longer in the
      online mask, yet it still holds private values.  This means that in
      this state, percpu_counter_sum() returns invalid results, and the
      suspend then hangs waiting for abs(dead-cpu-value) requests to
      complete, which of course will never happen.
      
      Fix this by clearing the state earlier, so we never have a case where
      the CPU isn't in the online mask but still holds private state.  This
      bug has been there since forever; I guess we don't have a lot of users
      where percpu counters need to be reliable during the suspend cycle.
      
      Reported-by: <lersek@redhat.com>
      Tested-by: Laszlo Ersek <lersek@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  12. 17 Jan 2014, 1 commit
  13. 15 Jan 2014, 1 commit
    • lib/percpu_counter.c: fix __percpu_counter_add() · 74e72f89
      Committed by Ming Lei
      __percpu_counter_add() may be called in a softirq/hardirq handler (for
      example, blk_mq_queue_exit() is typically called in hardirq/softirq
      context), so we need to call this_cpu_add() (an irq-safe helper) to
      update the percpu counter; otherwise counts may be lost.
      
      This fixes the problem that 'rmmod null_blk' hangs in blk_cleanup_queue()
      because of miscounting of request_queue->mq_usage_counter.
      
      This patch is the v1 of the previous one, "lib/percpu_counter.c:
      disable local irq when updating percpu counter", and takes Andrew's
      approach, which may be more efficient for ARCHs (x86, s390) that
      have optimized this_cpu_add().
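
      The change, sketched (preempt bookkeeping elided):

         /* Before: a separate read and write of the per-CPU slot.  An
          * interrupt arriving between the two can update the counter
          * and have its contribution overwritten and lost. */
         count = __this_cpu_read(*fbc->counters) + amount;
         __this_cpu_write(*fbc->counters, count);

         /* After: one irq-safe per-CPU add; on x86/s390 this compiles
          * to a single instruction. */
         this_cpu_add(*fbc->counters, amount);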
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Shaohua Li <shli@fusionio.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Fan Du <fan.du@windriver.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 25 Oct 2013, 1 commit
  15. 15 Jul 2013, 1 commit
    • kernel: delete __cpuinit usage from all core kernel files · 0db0628d
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the uses of the __cpuinit macros from C files in
      the core kernel directories (kernel, init, lib, mm, and include)
      that don't really have a specific maintainer.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  16. 04 Jul 2013, 1 commit
  17. 31 Jul 2012, 1 commit
  18. 01 Nov 2011, 1 commit
  19. 13 Sep 2011, 1 commit
  20. 17 Dec 2010, 1 commit
    • percpucounter: Optimize __percpu_counter_add a bit through the use of this_cpu() options. · 819a72af
      Committed by Christoph Lameter
      The this_cpu_* operations can be used to optimize __percpu_counter_add
      a bit.  This avoids some address arithmetic and saves 12 bytes.
      
      Before:
      
      
      00000000000001d3 <__percpu_counter_add>:
       1d3:	55                   	push   %rbp
       1d4:	48 89 e5             	mov    %rsp,%rbp
       1d7:	41 55                	push   %r13
       1d9:	41 54                	push   %r12
       1db:	53                   	push   %rbx
       1dc:	48 89 fb             	mov    %rdi,%rbx
       1df:	48 83 ec 08          	sub    $0x8,%rsp
       1e3:	4c 8b 67 30          	mov    0x30(%rdi),%r12
       1e7:	65 4c 03 24 25 00 00 	add    %gs:0x0,%r12
       1ee:	00 00
       1f0:	4d 63 2c 24          	movslq (%r12),%r13
       1f4:	48 63 c2             	movslq %edx,%rax
       1f7:	49 01 f5             	add    %rsi,%r13
       1fa:	49 39 c5             	cmp    %rax,%r13
       1fd:	7d 0a                	jge    209 <__percpu_counter_add+0x36>
       1ff:	f7 da                	neg    %edx
       201:	48 63 d2             	movslq %edx,%rdx
       204:	49 39 d5             	cmp    %rdx,%r13
       207:	7f 1e                	jg     227 <__percpu_counter_add+0x54>
       209:	48 89 df             	mov    %rbx,%rdi
       20c:	e8 00 00 00 00       	callq  211 <__percpu_counter_add+0x3e>
       211:	4c 01 6b 18          	add    %r13,0x18(%rbx)
       215:	48 89 df             	mov    %rbx,%rdi
       218:	41 c7 04 24 00 00 00 	movl   $0x0,(%r12)
       21f:	00
       220:	e8 00 00 00 00       	callq  225 <__percpu_counter_add+0x52>
       225:	eb 04                	jmp    22b <__percpu_counter_add+0x58>
       227:	45 89 2c 24          	mov    %r13d,(%r12)
       22b:	5b                   	pop    %rbx
       22c:	5b                   	pop    %rbx
       22d:	41 5c                	pop    %r12
       22f:	41 5d                	pop    %r13
       231:	c9                   	leaveq
       232:	c3                   	retq
      
      
      After:
      
      00000000000001d3 <__percpu_counter_add>:
       1d3:	55                   	push   %rbp
       1d4:	48 63 ca             	movslq %edx,%rcx
       1d7:	48 89 e5             	mov    %rsp,%rbp
       1da:	41 54                	push   %r12
       1dc:	53                   	push   %rbx
       1dd:	48 89 fb             	mov    %rdi,%rbx
       1e0:	48 8b 47 30          	mov    0x30(%rdi),%rax
       1e4:	65 44 8b 20          	mov    %gs:(%rax),%r12d
       1e8:	4d 63 e4             	movslq %r12d,%r12
       1eb:	49 01 f4             	add    %rsi,%r12
       1ee:	49 39 cc             	cmp    %rcx,%r12
       1f1:	7d 0a                	jge    1fd <__percpu_counter_add+0x2a>
       1f3:	f7 da                	neg    %edx
       1f5:	48 63 d2             	movslq %edx,%rdx
       1f8:	49 39 d4             	cmp    %rdx,%r12
       1fb:	7f 21                	jg     21e <__percpu_counter_add+0x4b>
       1fd:	48 89 df             	mov    %rbx,%rdi
       200:	e8 00 00 00 00       	callq  205 <__percpu_counter_add+0x32>
       205:	4c 01 63 18          	add    %r12,0x18(%rbx)
       209:	48 8b 43 30          	mov    0x30(%rbx),%rax
       20d:	48 89 df             	mov    %rbx,%rdi
       210:	65 c7 00 00 00 00 00 	movl   $0x0,%gs:(%rax)
       217:	e8 00 00 00 00       	callq  21c <__percpu_counter_add+0x49>
       21c:	eb 04                	jmp    222 <__percpu_counter_add+0x4f>
       21e:	65 44 89 20          	mov    %r12d,%gs:(%rax)
       222:	5b                   	pop    %rbx
       223:	41 5c                	pop    %r12
       225:	c9                   	leaveq
       226:	c3                   	retq
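
      At the C level, the change is roughly the following sketch (preempt
      bookkeeping elided):

         /* Before: compute the CPU-local address explicitly. */
         pcount = per_cpu_ptr(fbc->counters, smp_processor_id());
         count = *pcount + amount;

         /* After: segment-relative this_cpu ops, no address arithmetic. */
         count = __this_cpu_read(*fbc->counters) + amount;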
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  21. 27 Oct 2010, 3 commits
    • percpu_counter: use this_cpu_ptr() instead of per_cpu_ptr() · ea00c30b
      Committed by Christoph Lameter
      this_cpu_ptr() avoids an array lookup and can use the percpu offset of the
      local cpu directly.
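
      The substitution, sketched:

         /* Before: index the per-CPU array by the current CPU number. */
         s32 *pcount = per_cpu_ptr(fbc->counters, smp_processor_id());

         /* After: apply the local CPU's percpu offset directly. */
         s32 *pcount = this_cpu_ptr(fbc->counters);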
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • percpu_counter: add debugobj support · e2852ae8
      Committed by Tejun Heo
      All percpu counters are linked to a global list on initialization and
      removed from it on destruction.  The list is walked during CPU up/down.
      If a percpu counter is freed without being properly destroyed, the
      system will oops only on the next CPU up/down, making it pretty nasty
      to track down.  This patch adds debugobj support for percpu counters
      so that such problems can be found easily.
      
      As percpu counters don't make sense on the stack and can't be
      statically initialized, debugobj support is pretty simple.  It's
      initialized and activated on counter initialization, and deactivated
      and destroyed on counter destruction.  With this patch applied, the
      bug fixed by commit 602586a8 (shmem: put_super must
      percpu_counter_destroy) triggers the following warning on tmpfs
      unmount, and the system won't oops on the next cpu up/down operation.
      
       ------------[ cut here ]------------
       WARNING: at lib/debugobjects.c:259 debug_print_object+0x5c/0x70()
       Hardware name: Bochs
       ODEBUG: free active (active state 0) object type: percpu_counter
       Modules linked in:
       Pid: 3999, comm: umount Not tainted 2.6.36-rc2-work+ #5
       Call Trace:
        [<ffffffff81083f7f>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff81084076>] warn_slowpath_fmt+0x46/0x50
        [<ffffffff813b45cc>] debug_print_object+0x5c/0x70
        [<ffffffff813b50e5>] debug_check_no_obj_freed+0x125/0x210
        [<ffffffff811577d3>] kfree+0xb3/0x2f0
        [<ffffffff81132edd>] shmem_put_super+0x1d/0x30
        [<ffffffff81162e96>] generic_shutdown_super+0x56/0xe0
        [<ffffffff81162f86>] kill_anon_super+0x16/0x60
        [<ffffffff81162ff7>] kill_litter_super+0x27/0x30
        [<ffffffff81163295>] deactivate_locked_super+0x45/0x60
        [<ffffffff81163cfa>] deactivate_super+0x4a/0x70
        [<ffffffff8117d446>] mntput_no_expire+0x86/0xe0
        [<ffffffff8117df7f>] sys_umount+0x6f/0x360
        [<ffffffff8103f01b>] system_call_fastpath+0x16/0x1b
       ---[ end trace cce2a341ba3611a7 ]---
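
      A sketch of how the hooks slot in, using the generic debugobjects API
      (the wrapper names are illustrative):

         static struct debug_obj_descr percpu_counter_debug_descr = {
                 .name = "percpu_counter",
         };

         /* Called from counter initialization. */
         static void debug_percpu_counter_activate(struct percpu_counter *fbc)
         {
                 debug_object_init(fbc, &percpu_counter_debug_descr);
                 debug_object_activate(fbc, &percpu_counter_debug_descr);
         }

         /* Called from counter destruction.  A counter freed without this
          * step is reported as a "free active object", as in the trace
          * above, instead of oopsing later. */
         static void debug_percpu_counter_deactivate(struct percpu_counter *fbc)
         {
                 debug_object_deactivate(fbc, &percpu_counter_debug_descr);
                 debug_object_free(fbc, &percpu_counter_debug_descr);
         }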
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • percpu: fix list_head init bug in __percpu_counter_init() · 8474b591
      Committed by Masanori ITOH
      WARNING: at lib/list_debug.c:26 __list_add+0x3f/0x81()
      Hardware name: Express5800/B120a [N8400-085]
      list_add corruption. next->prev should be prev (ffffffff81a7ea00), but was dead000000200200. (next=ffff88080b872d58).
      Modules linked in: aoe ipt_MASQUERADE iptable_nat nf_nat autofs4 sunrpc bridge 8021q garp stp llc ipv6 cpufreq_ondemand acpi_cpufreq freq_table dm_round_robin dm_multipath kvm_intel kvm uinput lpfc scsi_transport_fc igb ioatdma scsi_tgt i2c_i801 i2c_core dca iTCO_wdt iTCO_vendor_support pcspkr shpchp megaraid_sas [last unloaded: aoe]
      Pid: 54, comm: events/3 Tainted: G        W  2.6.34-vanilla1 #1
      Call Trace:
      [<ffffffff8104bd77>] warn_slowpath_common+0x7c/0x94
      [<ffffffff8104bde6>] warn_slowpath_fmt+0x41/0x43
      [<ffffffff8120fd2e>] __list_add+0x3f/0x81
      [<ffffffff81212a12>] __percpu_counter_init+0x59/0x6b
      [<ffffffff810d8499>] bdi_init+0x118/0x17e
      [<ffffffff811f2c50>] blk_alloc_queue_node+0x79/0x143
      [<ffffffff811f2d2b>] blk_alloc_queue+0x11/0x13
      [<ffffffffa02a931d>] aoeblk_gdalloc+0x8e/0x1c9 [aoe]
      [<ffffffffa02aa655>] aoecmd_sleepwork+0x25/0xa8 [aoe]
      [<ffffffff8106186c>] worker_thread+0x1a9/0x237
      [<ffffffffa02aa630>] ? aoecmd_sleepwork+0x0/0xa8 [aoe]
      [<ffffffff81065827>] ? autoremove_wake_function+0x0/0x39
      [<ffffffff810616c3>] ? worker_thread+0x0/0x237
      [<ffffffff810653ad>] kthread+0x7f/0x87
      [<ffffffff8100aa24>] kernel_thread_helper+0x4/0x10
      [<ffffffff8106532e>] ? kthread+0x0/0x87
      [<ffffffff8100aa20>] ? kernel_thread_helper+0x0/0x10
      
      This happens because there is no initialization code for the list_head
      contained in struct backing_dev_info under CONFIG_HOTPLUG_CPU, and the
      bug comes up when block device drivers calling blk_alloc_queue() are
      used.  In my case, I hit it by using aoe.
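
      A sketch of the shape of the fix in __percpu_counter_init():
      initialize the node before it is ever linked or unlinked.

         #ifdef CONFIG_HOTPLUG_CPU
                 INIT_LIST_HEAD(&fbc->list);     /* the missing piece */
                 mutex_lock(&percpu_counters_lock);
                 list_add(&fbc->list, &percpu_counters);
                 mutex_unlock(&percpu_counters_lock);
         #endif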
      Signed-off-by: Masanori Itoh <itoumsn@nttdata.co.jp>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 10 Aug 2010, 1 commit
  23. 07 Jan 2009, 1 commit
  24. 29 Dec 2008, 1 commit
  25. 11 Dec 2008, 3 commits
    • revert "percpu_counter: new function percpu_counter_sum_and_set" · 02d21168
      Committed by Andrew Morton
      Revert
      
          commit e8ced39d
          Author: Mingming Cao <cmm@us.ibm.com>
          Date:   Fri Jul 11 19:27:31 2008 -0400
      
              percpu_counter: new function percpu_counter_sum_and_set
      
      As described in
      
      	revert "percpu counter: clean up percpu_counter_sum_and_set()"
      
      the new percpu_counter_sum_and_set() is racy against updates to the
      cpu-local accumulators on other CPUs.  Revert that change.
      
      This means that ext4 will be slow again.  But correct.
      Reported-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: <stable@kernel.org>		[2.6.27.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • revert "percpu counter: clean up percpu_counter_sum_and_set()" · 71c5576f
      Committed by Andrew Morton
      Revert
      
          commit 1f7c14c6
          Author: Mingming Cao <cmm@us.ibm.com>
          Date:   Thu Oct 9 12:50:59 2008 -0400
      
              percpu counter: clean up percpu_counter_sum_and_set()
      
      Before this patch we had the following:
      
      percpu_counter_sum(): return the percpu_counter's value
      
      percpu_counter_sum_and_set(): return the percpu_counter's value, copying
      that value into the central value and zeroing the per-cpu counters before
      returning.
      
      After this patch, percpu_counter_sum_and_set() has gone, and
      percpu_counter_sum() gets the old percpu_counter_sum_and_set()
      functionality.
      
      The problem is, as Eric points out, the old percpu_counter_sum_and_set()
      functionality was racy and wrong.  It zeroes out counters on "other"
      cpus, without holding any locks that would prevent races against
      updates from those other CPUs.
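
      The racy shape, sketched:

         /* Sum-and-set: fold in and zero every CPU's accumulator.
          * fbc->lock is held here, but that lock does not serialize the
          * __percpu_counter_add() per-CPU fast path, so zeroing another
          * CPU's slot can lose an in-flight update. */
         s64 ret = fbc->count;
         for_each_online_cpu(cpu) {
                 s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                 ret += *pcount;
                 *pcount = 0;            /* <-- the race */
         }
         fbc->count = ret;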
      
      This patch reverts 1f7c14c6.  This means
      that percpu_counter_sum_and_set() still has the race, but
      percpu_counter_sum() does not.
      
      Note that this is not a simple revert - ext4 has since started using
      percpu_counter_sum() for its dirty_blocks counter as well.
      
      Note that this revert patch changes percpu_counter_sum() semantics.
      
      Before the patch, a call to percpu_counter_sum() will bring the counter's
      central counter mostly up-to-date, so a following percpu_counter_read()
      will return a close value.
      
      After this patch, a call to percpu_counter_sum() will leave the counter's
      central accumulator unaltered, so a subsequent call to
      percpu_counter_read() can now return a significantly inaccurate result.
      
      If there is any code in the tree which was introduced after
      e8ced39d was merged, and which depends
      upon the new percpu_counter_sum() semantics, that code will break.
      Reported-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • percpu_counter: fix CPU unplug race in percpu_counter_destroy() · fd3d664f
      Committed by Eric Dumazet
      We should first delete the counter from the percpu_counters list
      before freeing its memory, or a percpu_counter_hotcpu_callback()
      could dereference a NULL pointer.
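
      The required ordering, sketched:

         void percpu_counter_destroy(struct percpu_counter *fbc)
         {
                 /* Unlink first, so the hotplug callback walking the
                  * percpu_counters list can no longer reach fbc... */
                 mutex_lock(&percpu_counters_lock);
                 list_del(&fbc->list);
                 mutex_unlock(&percpu_counters_lock);
                 /* ...and only then free the per-CPU storage. */
                 free_percpu(fbc->counters);
                 fbc->counters = NULL;
         }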
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. 10 Oct 2008, 1 commit
  27. 12 Jul 2008, 1 commit
    • percpu_counter: new function percpu_counter_sum_and_set · e8ced39d
      Committed by Mingming Cao
      Delayed allocation needs to check free blocks at every write.
      percpu_counter_read_positive() is not quite accurate; delayed
      allocation needs more accurate accounting, but calling
      percpu_counter_sum_positive() frequently is quite expensive.
      
      This patch adds a new function that updates the central counter while
      summing the per-cpu counters, to improve the accuracy of the next
      percpu_counter_read() and reduce the need for calls to the expensive
      percpu_counter_sum().
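
      The trade-off being addressed, sketched (the ext4 counter name is
      illustrative):

         /* Cheap but fuzzy: only the central value; per-CPU deltas are
          * ignored. */
         free = percpu_counter_read_positive(&sbi->s_freeblocks_counter);

         /* Exact but expensive: walks every CPU's accumulator. */
         free = percpu_counter_sum_positive(&sbi->s_freeblocks_counter);

         /* sum_and_set: pay for one exact sum, then fold the result into
          * the central value so the next cheap read is nearly accurate. */
         free = percpu_counter_sum_and_set(&sbi->s_freeblocks_counter);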
      Signed-off-by: Mingming Cao <cmm@us.ibm.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  28. 30 Apr 2008, 1 commit
  29. 20 Oct 2007, 1 commit
  30. 17 Oct 2007, 6 commits