1. Jul 24, 2011: 1 commit
  2. Jul 23, 2011: 4 commits
  3. Jul 22, 2011: 15 commits
  4. Jul 21, 2011: 20 commits
    • C
      netfilter: ipset: fix compiler warnings "'hash_ip4_data_next' declared inline after being called" · 0f598f0b
      Committed by Chris Friesen
      Some gcc versions warn about prototypes without "inline" when the declaration
      includes the "inline" keyword. The fix generates a false error message
      "marked inline, but without a definition" with sparse below 0.4.2.
      Signed-off-by: Chris Friesen <chris.friesen@genband.com>
      Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      0f598f0b
    • J
      netfilter: ipset: hash:net,iface fixed to handle overlapping nets behind different interfaces · 89dc79b7
      Committed by Jozsef Kadlecsik
      If overlapping networks with different interfaces were added to
      the set, the type did not handle it properly. Example:
      
          ipset create test hash:net,iface
          ipset add test 192.168.0.0/16,eth0
          ipset add test 192.168.0.0/24,eth1
      
      Now, if a packet was sent from 192.168.0.0/24,eth0, the type returned
      a match.
      
      In the patch the algorithm is fixed in order to correctly handle
      overlapping networks.
      
      Limitation: the same network cannot be stored with more than 64 different
      interfaces in a single set.
      Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      89dc79b7
    • J
      mutex: Make mutex_destroy() an inline function · 4582c0a4
      Committed by Jean Delvare
      The non-debug variant of mutex_destroy is a no-op, currently
      implemented as a macro which does nothing. This approach fails
      to check the type of the parameter, so an error would only show
      up when debugging is enabled. Using an inline function instead
      offers type checking, so bugs are caught earlier.
      Signed-off-by: Jean Delvare <khali@linux-fr.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20110716174200.41002352@endymion.delvare
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4582c0a4
    • W
      fs:update the NOTE of the file_operations structure · 295cc522
      Committed by Wanlong Gao
      The big kernel lock has been removed, and setlease now uses
      lock_flocks() to hold the special spinlock file_lock_lock, per
      Matthew's changes. So just remove the out-of-date NOTE.
      Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      295cc522
    • J
      fs: push i_mutex and filemap_write_and_wait down into ->fsync() handlers · 02c24a82
      Committed by Josef Bacik
      Btrfs needs to be able to control how filemap_write_and_wait_range() is called
      in fsync to make it less of a painful operation, so push the taking of i_mutex
      and the call to filemap_write_and_wait() down into the ->fsync() handlers.  Some
      file systems can drop taking the i_mutex altogether it seems, like ext3 and
      ocfs2.  For correctness' sake I just pushed everything down in all cases to make
      sure that we keep the current behavior the same for everybody, and then each
      individual fs maintainer can make up their mind about what to do from there.
      Thanks,
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      02c24a82
    • J
      fs: add SEEK_HOLE and SEEK_DATA flags · 982d8165
      Committed by Josef Bacik
      This just gets us ready to support the SEEK_HOLE and SEEK_DATA flags.  It turns
      out that using fiemap in things like cp causes more problems than it solves, so
      let's try to give userspace an interface that doesn't suck.  We need to match
      Solaris here, and the definitions are:
      
      *o* If /whence/ is SEEK_HOLE, the offset of the start of the
      next hole greater than or equal to the supplied offset
      is returned. The definition of a hole is provided near
      the end of the DESCRIPTION.
      
      *o* If /whence/ is SEEK_DATA, the file pointer is set to the
      start of the next non-hole file region greater than or
      equal to the supplied offset.
      
      So in the generic case the entire file is data and there is a virtual hole at
      the end.  That means we will just return i_size for SEEK_HOLE and will return
      the same offset for SEEK_DATA.  This is how Solaris does it so we have to do it
      the same way.
      
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      982d8165
    • K
      fs: seq_file - add event counter to simplify poll() support · f1514638
      Committed by Kay Sievers
      Moving the event counter into the dynamically allocated 'struct seq_file'
      allows poll() support without the need to allocate its own tracking
      structure.
      
      All current users are switched over to use the new counter.
      
      Requested-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: NeilBrown <neilb@suse.de>
      Tested-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      f1514638
    • C
      fs: simplify the blockdev_direct_IO prototype · aacfc19c
      Committed by Christoph Hellwig
      Simple filesystems always pass inode->i_sb->s_bdev as the block device
      argument, and never need an end_io handler.  Let's simplify things for
      them and for my grepping activity by dropping these arguments.  The
      only thing not falling into that scheme is ext4, which passes an
      end_io handler without needing special flags (yet), but given how
      messy the direct I/O code there is, use of __blockdev_direct_IO
      in one instead of two out of three cases isn't going to make a large
      difference anyway.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      aacfc19c
    • C
      rw_semaphore: remove up/down_read_non_owner · 11b80f45
      Committed by Christoph Hellwig
      Now that the last user is gone, these can be removed.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      11b80f45
    • C
      fs: kill i_alloc_sem · bd5fe6c5
      Committed by Christoph Hellwig
      i_alloc_sem is a rather special rw_semaphore.  It's the last one that may
      be released by a non-owner, and its write side is always mirrored by
      real exclusion.  Its intended use is to wait for all pending direct I/O
      requests to finish before starting a truncate.
      
      Replace it with a hand-grown construct:
      
       - exclusion for truncates is already guaranteed by i_mutex, so it can
         simply fall away
       - the reader side is replaced by an i_dio_count member in struct inode
         that counts the number of pending direct I/O requests.  Truncate can't
         proceed as long as it's non-zero
       - when i_dio_count reaches zero we wake up a pending truncate using
         wake_up_bit on a new bit in i_flags
       - new references to i_dio_count can't appear while we are waiting for
         it to reach zero, because starting a new direct I/O operation always
         needs i_mutex (or an equivalent like XFS's i_iolock).
      
      This scheme is much simpler, and saves the space of a spinlock_t and a
      struct list_head in struct inode (typically 160 bits on a non-debug 64-bit
      system).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      bd5fe6c5
    • T
      anonfd: fix missing declaration · e46ebd27
      Committed by Tomasz Stanislawski
      The forward declaration of struct file_operations is
      added to avoid compilation warnings.
      Signed-off-by: Tomasz Stanislawski <t.stanislaws@samsung.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e46ebd27
    • D
      superblock: add filesystem shrinker operations · 0e1fdafd
      Committed by Dave Chinner
      Now that we have a per-superblock shrinker implementation, we can add a
      filesystem specific callout to it to allow filesystem internal
      caches to be shrunk by the superblock shrinker.
      
      Rather than perpetuate the multipurpose shrinker callback API (i.e.
      nr_to_scan == 0 meaning "tell me how many objects are freeable in the
      cache"), two operations will be added. The first will return the
      number of objects that are freeable, the second is the actual
      shrinker call.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      0e1fdafd
    • D
      superblock: introduce per-sb cache shrinker infrastructure · b0d40c92
      Committed by Dave Chinner
      With context based shrinkers, we can implement a per-superblock
      shrinker that shrinks the caches attached to the superblock. We
      currently have global shrinkers for the inode and dentry caches that
      split up into per-superblock operations via a coarse proportioning
      method that does not batch very well.  The global shrinkers also
      have a dependency - dentries pin inodes - so we have to be very
      careful about how we register the global shrinkers so that the
      implicit call order is always correct.
      
      With a per-sb shrinker callout, we can encode this dependency
      directly into the per-sb shrinker, hence avoiding the need for
      strictly ordering shrinker registrations. We also have no need for
      any proportioning code, because the shrinker subsystem already
      provides this functionality across all shrinkers. Allowing the shrinker to
      operate on a single superblock at a time means that we do less
      superblock list traversals and locking and reclaim should batch more
      effectively. This should result in less CPU overhead for reclaim and
      potentially faster reclaim of items from each filesystem.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b0d40c92
    • P
      mmc: core: Set non-default Drive Strength via platform hook · ca8e99b3
      Committed by Philip Rakity
      A non-default Drive Strength cannot be set automatically.  It is a function
      of the board design, and it can only be set if there is a specific platform
      handler.  The platform handler needs to take the board design into
      account, so pass the necessary information to the platform code.
      
      For example:  The card and host controller may indicate they support HIGH
      and LOW drive strength.  There is no way to know what should be chosen
      without specific board knowledge.  Setting HIGH may lead to reflections
      and setting LOW may not suffice.  There is no mechanism (like ethernet
      duplex or speed pulses) to determine what should be done automatically.
      
      If no platform handler is defined -- use the default value.
      Signed-off-by: Philip Rakity <prakity@marvell.com>
      Reviewed-by: Arindam Nath <arindam.nath@amd.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      ca8e99b3
    • P
      mmc: core: add non-blocking mmc request function · aa8b683a
      Committed by Per Forlin
      Previously there has been only one function, mmc_wait_for_req(),
      to start and wait for a request. This patch adds:

       * mmc_start_req() - starts a request without waiting.
         If there is an ongoing request, wait for completion
         of that request, then start the new one and return.
         Does not wait for the new command to complete.
      
      This patch also adds new function members in struct mmc_host_ops
      only called from core.c:
      
       * pre_req - asks the host driver to prepare for the next job
       * post_req - asks the host driver to clean up after a completed job
      
      The intention is to use pre_req() and post_req() to do cache maintenance
      while a request is active. pre_req() can be called while a request is
      active to minimize latency to start next job. post_req() can be used after
      the next job is started to clean up the request. This will minimize the
      host driver request end latency. post_req() is typically used before
      ending the block request and handing over the buffer to the block layer.
      
      Add a host-private member in mmc_data to be used by pre_req to mark the
      data. The host driver will then check this mark to see if the data is
      prepared or not.
      Signed-off-by: Per Forlin <per.forlin@linaro.org>
      Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Venkatraman S <svenkatr@ti.com>
      Tested-by: Sourav Poddar <sourav.poddar@ti.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      aa8b683a
    • J
      mmc: dw_mmc: fix stop when fallen back to PIO · 03e8cb53
      Committed by James Hogan
      There are several situations when dw_mci_submit_data_dma() decides to
      fall back to PIO mode instead of using DMA, due to a short (to avoid
      overhead) or "complex" (e.g. with unaligned buffers) transaction, even
      though host->use_dma is set. However dw_mci_stop_dma() decides whether
      to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma.
      When falling back to PIO mode this results in data timeout errors
      getting missed and the driver locking up.
      
      Therefore add host->using_dma to indicate whether the current
      transaction is using DMA or not, and adjust dw_mci_stop_dma() to use
      that instead.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Will Newton <will.newton@imgtec.com>
      Tested-by: Jaehoon Chung <jh80.chung@samsung.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      03e8cb53
    • A
      mmc: queue: let host controllers specify maximum discard timeout · e056a1b5
      Committed by Adrian Hunter
      Some host controllers will not operate without a hardware
      timeout that is limited in value.  However large discards
      require large timeouts, so there needs to be a way to
      specify the maximum discard size.
      
      A host controller driver may now specify the maximum discard
      timeout possible so that max_discard_sectors can be calculated.
      
      However, for eMMC when the High Capacity Erase Group Size
      is not in use, the timeout calculation depends on clock
      rate which may change.  For that case Preferred Erase Size
      is used instead.
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      e056a1b5
    • J
      mmc: dw_mmc: handle unaligned buffers and sizes · 34b664a2
      Committed by James Hogan
      Update functions for PIO pushing and pulling data to and from the FIFO
      so that they can handle unaligned output buffers and unaligned buffer
      lengths. This makes more of the tests in mmc_test pass.
      
      Unaligned lengths in pulls are handled by reading the full FIFO item,
      and storing the remaining bytes in a small internal buffer (part_buf).
      The next data pull will copy data out of this buffer first before
      accessing the FIFO again. Similarly, for pushes the final bytes that
      don't fill a FIFO item are stored in the part_buf (or sent anyway if
      it's the last transfer), and then the part_buf is included at the
      beginning of the next buffer pushed.
      
      Unaligned buffers in pulls are handled specially if the architecture
      cannot do efficient unaligned accesses, by reading FIFO items into an
      aligned local buffer and memcpy'ing them into the output buffer, again
      storing any remaining bytes in the internal buffer. Similarly, for pushes
      the buffer is memcpy'd into an aligned local buffer and then written to the
      FIFO.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Will Newton <will.newton@imgtec.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      34b664a2
    • J
      mmc: dw_mmc: don't hard code fifo depth, fix usage · b86d8253
      Committed by James Hogan
      The FIFO_DEPTH hardware configuration parameter can be found from the
      power-on value of RX_WMark in the FIFOTH register. This is used to
      initialise the watermarks, but when calculating the number of free fifo
      spaces a preprocessor definition is used which is hard coded to 32.
      
      Fix reading the value out of FIFOTH (the default value in the RX_WMark
      field is FIFO_DEPTH-1, not FIFO_DEPTH). Allow the fifo depth to be
      overridden by platform data (since a bootloader may have changed FIFOTH,
      making auto-detection unreliable). Store the fifo_depth for later use.
      Also fix the calculation of the number of free bytes in the fifo to
      include the fifo depth in the left shift by the data shift, since the
      fifo depth is measured in fifo items, not bytes.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Will Newton <will.newton@imgtec.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      b86d8253