1. 09 Dec 2010, 1 commit
    • IB/uverbs: Handle large number of entries in poll CQ · 7182afea
      Authored by Dan Carpenter
      In ib_uverbs_poll_cq() code there is a potential integer overflow if
      userspace passes in a large cmd.ne.  The calls to kmalloc() would
      allocate smaller buffers than intended, leading to memory corruption.
      There is also an information leak if resp wasn't fully used.
      Unprivileged userspace may call this function, although only if an
      RDMA device that uses this function is present.
      
      Fix this by copying CQ entries one at a time, which avoids the
      allocation entirely, and by moving the copying into a function
      that makes sure to initialize all memory copied to userspace
      (a sketch of the pattern follows this entry).
      
      Special thanks to Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      for his help and advice.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      
      [ Monkey around with things a bit to avoid bad code generation by gcc
        when designated initializers are used.  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
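      A minimal sketch of the per-entry copy pattern described above,
      assuming the standard ib_poll_cq()/copy_to_user() interfaces and the
      uapi struct ib_uverbs_wc; the helper name and the subset of fields
      shown are illustrative, not a verbatim copy of the upstream diff:

          #include <linux/string.h>
          #include <linux/uaccess.h>
          #include <rdma/ib_verbs.h>
          #include <rdma/ib_user_verbs.h>

          /*
           * Copy a single completion to userspace via a fully zeroed stack
           * copy, so no uninitialized kernel memory can leak into resp.
           */
          static int copy_wc_to_user(void __user *dest, struct ib_wc *wc)
          {
                  struct ib_uverbs_wc tmp;

                  memset(&tmp, 0, sizeof(tmp));
                  tmp.wr_id      = wc->wr_id;
                  tmp.status     = wc->status;
                  tmp.opcode     = wc->opcode;
                  tmp.vendor_err = wc->vendor_err;
                  tmp.byte_len   = wc->byte_len;
                  tmp.qp_num     = wc->qp->qp_num;
                  tmp.src_qp     = wc->src_qp;
                  tmp.wc_flags   = wc->wc_flags;
                  /* ...remaining ib_uverbs_wc fields copied the same way... */

                  if (copy_to_user(dest, &tmp, sizeof(tmp)))
                          return -EFAULT;
                  return 0;
          }

      The poll loop can then call ib_poll_cq(cq, 1, &wc) and this helper once
      per entry, bounded by cmd.ne, so no cmd.ne-sized kmalloc() is needed and
      an oversized cmd.ne can no longer cause an undersized allocation.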
  2. 03 Dec 2010, 23 commits
  3. 02 Dec 2010, 14 commits
  4. 01 Dec 2010, 2 commits
    • xfs: only run xfs_error_test if error injection is active · c76febef
      Authored by Dave Chinner
      Recent tests writing lots of small files showed the flusher thread
      being CPU bound and taking a long time to do allocations on a debug
      kernel. perf showed this as the prime reason:
      
                   samples  pcnt function                    DSO
                   _______ _____ ___________________________ _________________
      
                 224648.00 36.8% xfs_error_test              [kernel.kallsyms]
                  86045.00 14.1% xfs_btree_check_sblock      [kernel.kallsyms]
                  39778.00  6.5% prandom32                   [kernel.kallsyms]
                  37436.00  6.1% xfs_btree_increment         [kernel.kallsyms]
                  29278.00  4.8% xfs_btree_get_rec           [kernel.kallsyms]
                  27717.00  4.5% random32                    [kernel.kallsyms]
      
      Walking btree blocks during allocation and checking them requires
      each block (a cache hit, so no I/O) to call xfs_error_test(), which
      then does a random32() call as its first operation.  In other words,
      ~50% of the CPU is being consumed just testing whether we need to
      inject an error, even though error injection is not active.
      
      Kill this overhead when error injection is not active by adding a
      global counter of active error traps and only calling into
      xfs_error_test() when fault injection is active (a sketch of the
      change follows this entry).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
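      A sketch of the fast-path gate described above, assuming the pre-patch
      xfs_error_test()/XFS_TEST_ERROR() interface from the 2010-era
      xfs_error.[ch]; the counter name and macro layout are illustrative,
      and the real change also decrements the counter when tags are cleared:

          #include <linux/compiler.h>     /* unlikely() */

          /*
           * xfs_error.c: one global count of armed error tags; zero means
           * error injection is completely inactive.  Incremented when an
           * error tag is armed, decremented when tags are cleared.
           */
          int xfs_error_test_active;

          /*
           * xfs_error.h: the common case is now a single load and branch
           * instead of an unconditional call into xfs_error_test() (and its
           * random32() call) for every btree block examined.
           */
          #define XFS_TEST_ERROR(expr, mp, tag, rf)                            \
                  ((expr) || (unlikely(xfs_error_test_active) &&               \
                              xfs_error_test((tag), (mp)->m_fixedfsid, "expr", \
                                             __LINE__, __FILE__, (rf))))

      With the counter at zero, the allocation-time btree walk pays only that
      branch, which is what removes xfs_error_test() and random32() from the
      top of the profile above.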
    • xfs: avoid moving stale inodes in the AIL · de25c181
      Authored by Dave Chinner
      When an inode has been marked stale because the cluster is being
      freed, we don't want to (re-)insert this inode into the AIL. There
      is a race condition where the cluster buffer may be unpinned before
      the inode is inserted into the AIL during transaction committed
      processing. If the buffer is unpinned before the inode item has been
      committed and inserted, then it is possible for the buffer to be
      released and hence process the stale inode callbacks before the inode
      is inserted into the AIL.
      
      In this case, we then insert a clean, stale inode into the AIL which
      will never get removed by an IO completion. It will, however, get
      reclaimed and that triggers an assert in xfs_inode_free()
      complaining about freeing an inode still in the AIL.
      
      This race can be avoided by not moving stale inodes forward in the
      AIL during transaction commit completion processing (a sketch of the
      check follows this entry). This closes the race condition by
      ensuring we never insert clean stale inodes into
      the AIL. It is safe to do this because a dirty stale inode, by
      definition, must already be in the AIL.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
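      A sketch of the check this change describes, assuming the existing
      convention that an ->iop_committed() return value of (xfs_lsn_t)-1
      tells transaction commit completion to skip the AIL update for that
      item; INODE_ITEM(), xfs_iflags_test() and XFS_ISTALE are the usual
      xfs_inode_item.c helpers, and the exact body of the real commit may
      differ:

          STATIC xfs_lsn_t
          xfs_inode_item_committed(
                  struct xfs_log_item     *lip,
                  xfs_lsn_t               lsn)
          {
                  struct xfs_inode_log_item *iip = INODE_ITEM(lip);
                  struct xfs_inode        *ip = iip->ili_inode;

                  /*
                   * Never insert or move a stale inode in the AIL: a clean
                   * stale inode must not be inserted at all, and a dirty
                   * stale inode is, by definition, already there.
                   */
                  if (xfs_iflags_test(ip, XFS_ISTALE))
                          return (xfs_lsn_t)-1;
                  return lsn;
          }

      Because a dirty stale inode is already in the AIL, skipping the update
      here only ever prevents inserting a clean stale inode, which is exactly
      the case that tripped the assert in xfs_inode_free().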