1. 23 May 2013 (1 commit)
  2. 12 May 2013 (2 commits)
  3. 30 Mar 2013 (2 commits)
  4. 27 Mar 2013 (2 commits)
    • regmap: cache: Use raw I/O to sync rbtrees if we can · 137b8334
      Authored by Mark Brown
      By itself this brings no meaningful benefit; it is done as a separate
      commit to aid bisection if there are problems with the following
      commits, which add support for coalescing adjacent writes.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      137b8334
    • regmap: Cut down on the average # of nodes in the rbtree cache · 0c7ed856
      Authored by Dimitris Papastamos
      This patch aims to bring down the average number of nodes
      in the rbtree cache and increase the average number of registers
      per node.  This should improve general lookup and traversal times.
      It is achieved by setting the minimum size of a block within the
      rbnode to the size of the rbnode itself.  This will essentially
      cache possibly non-existent registers, so to combat this scenario
      we keep a separate bitmap in memory which keeps track of which
      registers actually exist.  The memory overhead of this change is
      likely on the order of ~5-10%, possibly less depending on the
      register file layout.  On my test system with a bitmap of ~4300 bits
      and a relatively sparse register layout, the memory requirements for
      the entire cache did not increase (the number of nodes dropped to
      about 50% of the original count, which compensated for the extra
      bitmap).

      A second patch built on top of this one could look at the
      ratio `sizeof(*rbnode) / map->cache_word_size` in order to suitably
      adjust the block length of each block.
      Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      0c7ed856
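The presence bitmap described in this commit can be sketched in plain C. This is an illustrative stand-alone model under assumed names (`MAX_REG`, `mark_present`, `is_present` are made up for the sketch, not the kernel's regcache internals):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Hypothetical model of the presence bitmap: one bit per register,
 * set when the register actually exists in the register file. */
#define MAX_REG       4300
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long cache_present[(MAX_REG + BITS_PER_WORD - 1) / BITS_PER_WORD];

/* Mark a register as existing. */
static void mark_present(unsigned int reg)
{
    cache_present[reg / BITS_PER_WORD] |= 1UL << (reg % BITS_PER_WORD);
}

/* Query whether a register exists.  Blocks may now cover holes in the
 * register map; this bitmap is what tells the cache a covered register
 * is not real. */
static bool is_present(unsigned int reg)
{
    return (cache_present[reg / BITS_PER_WORD] >> (reg % BITS_PER_WORD)) & 1UL;
}
```

At ~4300 bits the bitmap itself costs only a few hundred bytes, consistent with the modest overhead the commit reports.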
  5. 14 Mar 2013 (1 commit)
    • regmap: cache: Fix regcache-rbtree sync · 8abac3ba
      Authored by Lars-Peter Clausen
      The last register block which falls into the specified range is not
      handled correctly. The formula which calculates the number of
      registers that should be synced is inverted (and off by one). E.g. if
      all registers in that block should be synced, only one is synced, and
      if only one should be synced, all (but one) are synced. To calculate
      the number of registers that need to be synced we need to subtract
      the number of the first register in the block from the max register
      number and add one. This patch updates the code accordingly.

      The issue was introduced in commit ac8d91c8 ("regmap: Supply ranges
      to the sync operations").
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: stable@vger.kernel.org
      8abac3ba
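The corrected arithmetic is simply `max - base + 1`. A minimal illustrative helper (not the actual regcache-rbtree code):

```c
#include <assert.h>

/* Number of registers to sync in the last block: subtract the block's
 * first register from the maximum register in the range, then add one.
 * Illustrative helper, not the kernel function. */
static unsigned int regs_to_sync(unsigned int base_reg, unsigned int max_reg)
{
    return max_reg - base_reg + 1;
}
```

For example, a block whose first register is 8, synced up to register 11, needs 4 registers written.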
  6. 13 Mar 2013 (1 commit)
  7. 04 Mar 2013 (2 commits)
  8. 13 Apr 2012 (2 commits)
    • regmap: implement register striding · edc9ae42
      Authored by Stephen Warren
      regmap_config.reg_stride is introduced. All extant register addresses
      are a multiple of this value. Users of serial-oriented regmap busses will
      typically set this to 1. Users of the MMIO regmap bus will typically set
      this based on the value size of their registers, in bytes, so 4 for a
      32-bit register.
      
      Throughout the regmap code, actual register addresses are used. Wherever
      the register address is used to index some array of values, the address
      is divided by the stride to determine the index, or vice-versa. Error-
      checking is added to all entry-points for register address data to ensure
      that register addresses actually satisfy the specified stride. The MMIO
      bus ensures that the specified stride is large enough for the register
      size.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      edc9ae42
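The stride arithmetic described above can be sketched as follows; `fake_map` and `reg_to_index` are hypothetical names standing in for the real regmap internals:

```c
#include <assert.h>
#include <errno.h>

struct fake_map {
    unsigned int reg_stride;   /* e.g. 4 for 32-bit MMIO registers */
};

/* Validate a register address against the stride and convert it to an
 * index into a value array, like the entry-point checks described
 * above. */
static int reg_to_index(const struct fake_map *map, unsigned int reg,
                        unsigned int *index)
{
    if (reg % map->reg_stride)
        return -EINVAL;        /* address does not satisfy the stride */
    *index = reg / map->reg_stride;
    return 0;
}
```

With `reg_stride = 4`, register 8 maps to index 2, while register 6 is rejected.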
    • regmap: introduce fast_io busses, and use a spinlock for them · a42678c4
      Authored by Stephen Warren
      Some bus types have very fast IO. For these, acquiring a mutex for every
      IO operation is a significant overhead. Allow busses to indicate their IO
      is fast, and enhance regmap to use a spinlock for those busses.
      
      [Currently limited to native endian registers -- broonie]
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      a42678c4
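The idea can be modelled in userspace C: pick the locking primitive based on a fast_io flag. Here a C11 atomic_flag busy-wait stands in for the kernel spinlock and a pthread mutex for the kernel mutex; this is a sketch under those assumptions, not the regmap implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t slow_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_flag fast_lock = ATOMIC_FLAG_INIT;

/* For fast_io busses, a busy-waiting spinlock avoids the cost of a
 * sleeping mutex around every (very short) IO operation. */
static void map_lock(bool fast_io)
{
    if (fast_io)
        while (atomic_flag_test_and_set_explicit(&fast_lock,
                                                 memory_order_acquire))
            ;                  /* spin: the critical section is tiny */
    else
        pthread_mutex_lock(&slow_lock);
}

static void map_unlock(bool fast_io)
{
    if (fast_io)
        atomic_flag_clear_explicit(&fast_lock, memory_order_release);
    else
        pthread_mutex_unlock(&slow_lock);
}
```

The choice is made once per map at creation time, so the per-operation cost is a single branch.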
  9. 10 Apr 2012 (1 commit)
    • regmap: implement register striding · f01ee60f
      Authored by Stephen Warren
      regmap_config.reg_stride is introduced. All extant register addresses
      are a multiple of this value. Users of serial-oriented regmap busses will
      typically set this to 1. Users of the MMIO regmap bus will typically set
      this based on the value size of their registers, in bytes, so 4 for a
      32-bit register.
      
      Throughout the regmap code, actual register addresses are used. Wherever
      the register address is used to index some array of values, the address
      is divided by the stride to determine the index, or vice-versa. Error-
      checking is added to all entry-points for register address data to ensure
      that register addresses actually satisfy the specified stride. The MMIO
      bus ensures that the specified stride is large enough for the register
      size.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      f01ee60f
  10. 06 Apr 2012 (1 commit)
  11. 05 Apr 2012 (1 commit)
  12. 01 Apr 2012 (1 commit)
  13. 12 Mar 2012 (1 commit)
    • device.h: cleanup users outside of linux/include (C files) · 51990e82
      Authored by Paul Gortmaker
      For files that are actively using linux/device.h, make sure
      that they call it out.  This will allow us to clean up some
      of the implicit uses of linux/device.h within include/*
      without introducing build regressions.
      
      Yes, this was created by "cheating" -- i.e. the headers were
      cleaned up, and then the fallout was found and fixed, and then
      the two commits were reordered.  This ensures we don't introduce
      build regressions into the git history.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      51990e82
  14. 06 Mar 2012 (2 commits)
  15. 24 Feb 2012 (1 commit)
    • regmap: Supply ranges to the sync operations · ac8d91c8
      Authored by Mark Brown
      In order to allow us to support partial sync operations, add minimum
      and maximum register arguments to the sync operation and update the
      rbtree and lzo caches to use this new information. The LZO
      implementation is obviously not good (we could exit the iteration
      earlier), but there may be room for more wide-reaching optimisation
      there.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      ac8d91c8
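A toy model of a ranged sync: only dirty registers inside [min, max] are written out. The flat arrays and all names here are illustrative stand-ins; the real code walks the cache's own structures:

```c
#include <assert.h>

#define NREGS 8
static unsigned int hw[NREGS];     /* last values written to "hardware" */
static unsigned int cache[NREGS];  /* cached (possibly dirty) values */

/* Sync only the registers in [min_reg, max_reg]; returns the number of
 * writes performed. */
static unsigned int sync_range(unsigned int min_reg, unsigned int max_reg)
{
    unsigned int reg, written = 0;

    for (reg = 0; reg < NREGS; reg++) {
        if (reg < min_reg || reg > max_reg)
            continue;              /* outside the requested range */
        if (hw[reg] != cache[reg]) {
            hw[reg] = cache[reg];  /* write the dirty value out */
            written++;
        }
    }
    return written;
}
```

As the commit notes, a smarter implementation could stop iterating once past max_reg instead of scanning the whole register space.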
  16. 22 Nov 2011 (2 commits)
  17. 16 Nov 2011 (1 commit)
  18. 10 Oct 2011 (3 commits)
  19. 30 Sep 2011 (1 commit)
  20. 28 Sep 2011 (2 commits)
  21. 27 Sep 2011 (1 commit)
  22. 20 Sep 2011 (1 commit)
    • regmap: Add the rbtree cache support · 28644c80
      Authored by Dimitris Papastamos
      This patch adds support for the rbtree cache compression type.

      Each rbnode manages a variable-length block of registers.  No two
      nodes may have overlapping blocks.  Each block has a base register
      and a current top register; all other registers, if any, lie between
      these two, in ascending order.
      
      The reasoning behind the construction of this rbtree is simple.  In
      the snd_soc_rbtree_cache_init() function, we iterate over the
      register defaults provided by the regcache core.  For each register
      whose value is non-zero we insert it in the rbtree.  To determine
      which rbnode the register belongs in, we first check whether there is
      another register already added that is adjacent to the one we are
      about to add.  If so, we append it to that rbnode's block; otherwise
      we create a new rbnode with a single register in its block and add it
      to the tree.
      
      There are various optimizations across the implementation to speed up lookups
      by caching the most recently used rbnode.
      Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
      Tested-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      28644c80
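The insertion policy described above (append to an adjacent block if one exists, otherwise start a new single-register block) can be sketched with a flat array of blocks standing in for the rbtree, since the tree only accelerates lookup while the adjacency logic is the same. All names here are illustrative:

```c
#include <assert.h>

struct block {
    unsigned int base;   /* first register in the block */
    unsigned int top;    /* last register in the block */
};

#define MAX_BLOCKS 16
static struct block blocks[MAX_BLOCKS];
static unsigned int nblocks;

/* Insert a register: extend an adjacent block if one exists, otherwise
 * create a new block holding just this register. */
static void insert_reg(unsigned int reg)
{
    unsigned int i;

    for (i = 0; i < nblocks; i++) {
        if (reg >= blocks[i].base && reg <= blocks[i].top)
            return;                      /* already covered */
        if (reg == blocks[i].top + 1) {  /* adjacent above: extend */
            blocks[i].top = reg;
            return;
        }
        if (blocks[i].base > 0 && reg == blocks[i].base - 1) {
            blocks[i].base = reg;        /* adjacent below: extend */
            return;
        }
    }
    blocks[nblocks].base = reg;          /* no neighbour: new block */
    blocks[nblocks].top = reg;
    nblocks++;
}
```

Inserting 5, 6, 10, 4 in that order yields two blocks, [4..6] and [10..10], mirroring how adjacent registers coalesce while isolated ones get their own node.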