1. 08 Mar, 2015 (1 commit)
  2. 20 Oct, 2014 (1 commit)
  3. 26 Aug, 2014 (1 commit)
    • regmap: Fix regcache debugfs initialization · 5e0cbe78
      By Lars-Peter Clausen
      Commit 6cfec04b ("regmap: Separate regmap dev initialization") moved the
      regmap debugfs initialization after regcache initialization. This means
      that the regmap debugfs directory is not created yet when the cache
      initialization runs and so any debugfs files registered by the regcache are
      created in the debugfs root directory rather than the debugfs directory of
      the regmap instance. Fix this by adding a separate callback for the
      regcache debugfs initialization which will be called after the parent
      debugfs entry has been created.
      
      Fixes: 6cfec04b (regmap: Separate regmap dev initialization)
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      Cc: stable@vger.kernel.org
      5e0cbe78
  4. 14 Apr, 2014 (1 commit)
  5. 29 Aug, 2013 (3 commits)
    • regmap: rbtree: Make cache_present bitmap per node · 3f4ff561
      By Lars-Peter Clausen
      With devices which have a dense and small register map but placed at a large
      offset the global cache_present bitmap imposes a huge memory overhead. Making
      the cache_present per rbtree node avoids the issue and easily reduces the memory
      footprint by a factor of ten. For devices with a more sparse map or without a
      large base register offset the memory usage might increase slightly by a few
      bytes, but not significantly. E.g. for a device which has ~50 registers at
      offset 0x4000 the memory footprint of the register cache goes down from 2496
      bytes to 175 bytes.
      
      Moving the bitmap to a per node basis means that the handling of the bitmap is
      now cache implementation specific and can no longer be managed by the core. The
      regcache_sync_block() function is extended with an additional parameter so that the
      cache implementation can tell the core which registers in the block are set and
      which are not. The parameter is optional and if NULL the core assumes that all
      registers are set. The rbtree cache also needs to implement its own drop
      callback instead of relying on the core to handle this.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      3f4ff561
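      The per-node layout described above can be sketched as follows. This is an illustrative simplification, not the kernel's exact regcache_rbtree_node definition; the struct fields and helper name are assumptions made for the example.

```c
#include <limits.h>

/* Simplified sketch of a per-node presence bitmap (illustrative names,
 * not the exact regcache_rbtree_node layout). Each node carries a bitmap
 * sized to its own block, so ~50 registers at offset 0x4000 need ~50
 * bits instead of a global bitmap spanning all addresses up to 0x4032. */
struct rbtree_node {
    unsigned int base_reg;        /* first register covered by the block */
    unsigned int blklen;          /* number of registers in the block */
    unsigned long *cache_present; /* one bit per register in the block */
};

#define BITS_PER_ULONG (sizeof(unsigned long) * CHAR_BIT)

/* Test whether a register is cached, indexing relative to the block base. */
static int node_reg_present(const struct rbtree_node *node, unsigned int reg)
{
    unsigned int idx = reg - node->base_reg;

    if (reg < node->base_reg || idx >= node->blklen)
        return 0;
    return !!(node->cache_present[idx / BITS_PER_ULONG] &
              (1UL << (idx % BITS_PER_ULONG)));
}
```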
    • regmap: rbtree: Reduce number of nodes, take 2 · 472fdec7
      By Lars-Peter Clausen
      Support for reducing the number of nodes and memory consumption of the rbtree
      cache by allowing for small unused holes in the node's register cache block was
      initially added in commit 0c7ed856 ("regmap: Cut down on the average # of nodes
      in the rbtree cache"). But the commit had problems and so its effect was
      reverted again in commit 4e67fb5f ("regmap: rbtree: Fix overlapping rbnodes.").
      This patch brings back the feature of reducing the average number of nodes,
      which speeds up node look-up while also reducing the memory
      usage of the rbtree cache. This patch takes a slightly different approach than
      the original patch though. It modifies the adjacent node look-up to not only
      consider nodes that are just one to the left or the right of the register but
      any node that falls in a certain range around the register. The range is
      calculated based on how much memory it would take to allocate a new node
      compared to how much memory it takes adding a set of unused registers to an
      existing node. E.g. if a node takes up 24 bytes and each register in a block
      uses 1 byte the range will be from the register address - 24 to the register
      address + 24. If we find a node that falls within this range it is cheaper or as
      expensive to add the register to the existing node and have a couple of unused
      registers in the node's cache compared to allocating a new node.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      472fdec7
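      The widened adjacency check described above reduces to a simple window test. The helpers below are a hedged sketch with hypothetical names, not regmap's actual code: the window radius is the node overhead divided by the per-register cache size, and any node whose block lies within that radius of the new register is a merge candidate.

```c
/* Window radius for merging: with a 24-byte node and 1-byte registers
 * this is 24, so extending a node within [reg - 24, reg + 24] costs no
 * more memory than allocating a new node. (Illustrative helper.) */
static unsigned int merge_distance(unsigned int node_size,
                                   unsigned int word_size)
{
    return node_size / word_size;
}

/* True when the register is close enough to the block [base_reg, top_reg]
 * that padding the block beats allocating a fresh node. */
static int node_in_window(unsigned int base_reg, unsigned int top_reg,
                          unsigned int reg, unsigned int dist)
{
    return reg + dist >= base_reg && reg <= top_reg + dist;
}
```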
    • regmap: rbtree: Simplify adjacent node look-up · 194c753a
      By Lars-Peter Clausen
      A register that is adjacent to a node will either be to the left of the node's
      first register or to the right of its last register. It will not be within the node's range,
      so there is no point in checking for each register cached by the node whether
      the new register is next to it. It is sufficient to check whether the register
      comes before the first register or after the last register of the node.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      194c753a
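      The simplified check amounts to comparing the register against the two block boundaries only. A minimal sketch (illustrative helper, not the kernel's exact code):

```c
/* A register can only be adjacent at the block boundaries: one below the
 * first cached register or one above the last. Writing reg + 1 == base_reg
 * instead of reg == base_reg - 1 avoids unsigned underflow at base_reg 0. */
static int reg_adjacent(unsigned int base_reg, unsigned int top_reg,
                        unsigned int reg)
{
    return (reg + 1 == base_reg) || (reg == top_reg + 1);
}
```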
  6. 27 Aug, 2013 (1 commit)
  7. 22 Aug, 2013 (1 commit)
    • regmap: rbtree: Fix overlapping rbnodes. · 4e67fb5f
      By David Jander
      Avoid overlapping register regions by making the initial blklen of a new
      node 1. If a register write occurs to a not-yet-cached register that is
      lower than, but near, an existing node's base_reg, a new node is created
      and its blklen is set to an arbitrary value (sizeof(*rbnode)). That may
      cause this node to overlap with another node. Those nodes should be merged,
      but this merge doesn't happen yet, so this patch at least makes the initial
      blklen small enough to avoid hitting the wrong node, which may otherwise
      lead to severe breakage.
      Signed-off-by: David Jander <david@protonic.nl>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      Cc: stable@vger.kernel.org
      4e67fb5f
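      To see why an arbitrary initial blklen can collide with an existing node, consider a plain interval-overlap test (a sketch, not regmap code):

```c
/* Two register blocks [base, base + len) overlap iff each starts before
 * the other ends. With the old arbitrary initial blklen (e.g. 24), a new
 * node created just below an existing node's base_reg could spill into
 * its range; with an initial blklen of 1 it cannot. */
static int blocks_overlap(unsigned int base_a, unsigned int len_a,
                          unsigned int base_b, unsigned int len_b)
{
    return base_a < base_b + len_b && base_b < base_a + len_a;
}
```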
  8. 02 Jun, 2013 (1 commit)
  9. 24 May, 2013 (1 commit)
  10. 23 May, 2013 (1 commit)
  11. 12 May, 2013 (2 commits)
  12. 30 Mar, 2013 (2 commits)
  13. 27 Mar, 2013 (2 commits)
    • regmap: cache: Use raw I/O to sync rbtrees if we can · 137b8334
      By Mark Brown
      This brings no meaningful benefit by itself; it is done as a separate
      commit to aid bisection if there are problems with the following commits
      adding support for coalescing adjacent writes.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      137b8334
    • regmap: Cut down on the average # of nodes in the rbtree cache · 0c7ed856
      By Dimitris Papastamos
      This patch aims to bring down the average number of nodes
      in the rbtree cache and increase the average number of registers
      per node.  This should improve general lookup and traversal times.
      This is achieved by setting the minimum size of a block within the
      rbnode to the size of the rbnode itself.  This will essentially
      cache possibly non-existent registers; to combat this, we keep
      a separate bitmap in memory that tracks which registers
      exist.  The memory overhead of this change is likely on the order of
      ~5-10%, possibly less depending on the register file layout.  On my test
      system with a bitmap of ~4300 bits and a relatively sparse register
      layout, the memory requirements for the entire cache did not increase
      (the node count dropped to about 50% of the original, which
      compensated for the bitmap).
      
      A second patch that can be built on top of this can look at the
      ratio `sizeof(*rbnode) / map->cache_word_size' in order to suitably
      adjust the block length of each block.
      Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      0c7ed856
  14. 14 Mar, 2013 (1 commit)
    • regmap: cache: Fix regcache-rbtree sync · 8abac3ba
      By Lars-Peter Clausen
      The last register block, which falls into the specified range, is not handled
      correctly. The formula that calculates the number of registers to be
      synced is inverted (and off by one). E.g. if all registers in that block should
      be synced only one is synced, and if only one should be synced all (but one) are
      synced. To calculate the number of registers that need to be synced we need to
      subtract the number of the first register in the block from the max register
      number and add one. This patch updates the code accordingly.
      
      The issue was introduced in commit ac8d91c8 ("regmap: Supply ranges to the sync
      operations").
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: stable@vger.kernel.org
      8abac3ba
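      The corrected count reduces to simple arithmetic. Assuming base_reg is the first register of the block and max_reg is the top of the sync range (helper name is illustrative):

```c
/* Number of registers to sync in the final block: from the block's first
 * register up to and including the range maximum. The pre-fix code had
 * this inverted (and off by one), syncing 1 register instead of all and
 * vice versa. */
static unsigned int last_block_sync_count(unsigned int base_reg,
                                          unsigned int max_reg)
{
    return max_reg - base_reg + 1;
}
```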
  15. 13 Mar, 2013 (1 commit)
  16. 04 Mar, 2013 (2 commits)
  17. 13 Apr, 2012 (2 commits)
    • regmap: implement register striding · edc9ae42
      By Stephen Warren
      regmap_config.reg_stride is introduced. All extant register addresses
      are a multiple of this value. Users of serial-oriented regmap busses will
      typically set this to 1. Users of the MMIO regmap bus will typically set
      this based on the value size of their registers, in bytes, so 4 for a
      32-bit register.
      
      Throughout the regmap code, actual register addresses are used. Wherever
      the register address is used to index some array of values, the address
      is divided by the stride to determine the index, or vice versa.
      Error-checking is added to all entry points for register address data to ensure
      that register addresses actually satisfy the specified stride. The MMIO
      bus ensures that the specified stride is large enough for the register
      size.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      edc9ae42
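      The address-to-index mapping the commit describes can be sketched as follows (a hypothetical helper, not regmap's actual API):

```c
/* Map a register address to a value-array index under reg_stride.
 * Addresses that are not a multiple of the stride are rejected, mirroring
 * the entry-point checks the commit adds; e.g. stride 4 for 32-bit MMIO
 * registers, stride 1 for serial busses. */
static int reg_to_index(unsigned int reg, unsigned int stride,
                        unsigned int *index)
{
    if (stride == 0 || reg % stride != 0)
        return -1; /* invalid: address does not satisfy the stride */
    *index = reg / stride;
    return 0;
}
```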
    • regmap: introduce fast_io busses, and use a spinlock for them · a42678c4
      By Stephen Warren
      Some bus types have very fast IO. For these, acquiring a mutex for every
      IO operation is a significant overhead. Allow busses to indicate their IO
      is fast, and enhance regmap to use a spinlock for those busses.
      
      [Currently limited to native endian registers -- broonie]
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      a42678c4
  18. 10 Apr, 2012 (1 commit)
    • regmap: implement register striding · f01ee60f
      By Stephen Warren
      regmap_config.reg_stride is introduced. All extant register addresses
      are a multiple of this value. Users of serial-oriented regmap busses will
      typically set this to 1. Users of the MMIO regmap bus will typically set
      this based on the value size of their registers, in bytes, so 4 for a
      32-bit register.
      
      Throughout the regmap code, actual register addresses are used. Wherever
      the register address is used to index some array of values, the address
      is divided by the stride to determine the index, or vice versa.
      Error-checking is added to all entry points for register address data to ensure
      that register addresses actually satisfy the specified stride. The MMIO
      bus ensures that the specified stride is large enough for the register
      size.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      f01ee60f
  19. 06 Apr, 2012 (1 commit)
  20. 05 Apr, 2012 (1 commit)
  21. 01 Apr, 2012 (1 commit)
  22. 12 Mar, 2012 (1 commit)
    • device.h: cleanup users outside of linux/include (C files) · 51990e82
      By Paul Gortmaker
      For files that are actively using linux/device.h, make sure
      that they call it out.  This will allow us to clean up some
      of the implicit uses of linux/device.h within include/*
      without introducing build regressions.
      
      Yes, this was created by "cheating" -- i.e. the headers were
      cleaned up, and then the fallout was found and fixed, and then
      the two commits were reordered.  This ensures we don't introduce
      build regressions into the git history.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      51990e82
  23. 06 Mar, 2012 (2 commits)
  24. 24 Feb, 2012 (1 commit)
    • regmap: Supply ranges to the sync operations · ac8d91c8
      By Mark Brown
      In order to allow us to support partial sync operations add minimum and
      maximum register arguments to the sync operation and update the rbtree
      and LZO caches to use this new information. The LZO implementation is
      clearly not ideal; we could exit the iteration earlier, but there may
      be room for wider-reaching optimisation there.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      ac8d91c8
  25. 22 Nov, 2011 (2 commits)
  26. 16 Nov, 2011 (1 commit)
  27. 10 Oct, 2011 (3 commits)
  28. 30 Sep, 2011 (1 commit)
  29. 28 Sep, 2011 (1 commit)