- 08 Mar 2015, 1 commit

By Lars-Peter Clausen

When inserting a new register into a block at the lower end, the present bitmap is currently shifted in the wrong direction. The effect of this is that the bitmap becomes corrupted, and registers which are present might be reported as not present and vice versa. Fix this by shifting left rather than right.

Fixes: 472fdec7 ("regmap: rbtree: Reduce number of nodes, take 2")
Reported-by: Daniel Baluta <daniel.baluta@gmail.com>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
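As a rough illustration of why the shift direction matters, here is a standalone sketch that uses a single 64-bit word in place of the kernel's bitmap helpers; the values and names are made up for the example, not taken from the driver.

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Toy model: one 64-bit word stands in for a block's "present" bitmap,
 * where bit i set means register (base_reg + i) is cached.
 */
int main(void)
{
    uint64_t present = 0x5;       /* registers base+0 and base+2 are present */
    unsigned int grow_by = 3;     /* block extended by 3 registers at the lower end */

    /*
     * After the extension, old register base+i becomes base+i+grow_by, so
     * every existing bit has to move towards the high end: shift left.
     * Shifting right instead discards the present information.
     */
    uint64_t correct = present << grow_by;  /* 0x28: base+3 and base+5 set */
    uint64_t wrong   = present >> grow_by;  /* 0x0: information lost       */

    printf("correct: %#llx, wrong: %#llx\n",
           (unsigned long long)correct, (unsigned long long)wrong);
    return 0;
}
```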
-
- 20 Oct 2014, 1 commit

By Xiubo Li

If the include headers aren't sorted alphabetically, then the logical choice is to append new ones. However, that creates a lot of potential for conflicts or duplicates, because every change will then add new includes in the same location.

Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
-
- 26 Aug 2014, 1 commit

By Lars-Peter Clausen

Commit 6cfec04b ("regmap: Separate regmap dev initialization") moved the regmap debugfs initialization after the regcache initialization. This means that the regmap debugfs directory is not created yet when the cache initialization runs, so any debugfs files registered by the regcache are created in the debugfs root directory rather than in the debugfs directory of the regmap instance. Fix this by adding a separate callback for the regcache debugfs initialization which will be called after the parent debugfs entry has been created.

Fixes: 6cfec04b ("regmap: Separate regmap dev initialization")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
Cc: stable@vger.kernel.org
-
- 14 Apr 2014, 1 commit

By Jean-Christophe PINCE

Change the regcache_rbtree_node structure's field order to align the pointers on 64-bit architectures.

Signed-off-by: Jean-Christophe PINCE <jean-christophe.pince@intel.com>
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 29 Aug 2013, 3 commits

By Lars-Peter Clausen

With devices which have a dense and small register map placed at a large offset, the global cache_present bitmap imposes a huge memory overhead. Making the cache_present bitmap per rbtree node avoids the issue and easily reduces the memory footprint by a factor of ten. For devices with a more sparse map or without a large base register offset the memory usage might increase slightly by a few bytes, but not significantly. E.g. for a device which has ~50 registers at offset 0x4000, the memory footprint of the register cache goes down from 2496 bytes to 175 bytes.

Moving the bitmap to a per-node basis means that the handling of the bitmap is now cache implementation specific and can no longer be managed by the core. The regcache_sync_block() function is extended by an additional parameter so that the cache implementation can tell the core which registers in the block are set and which are not. The parameter is optional; if it is NULL the core assumes that all registers are set. The rbtree cache also needs to implement its own drop callback instead of relying on the core to handle this.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
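A simplified sketch of how a sync loop can honour an optional per-block present bitmap, with NULL meaning "all registers set"; the structures and function names here are invented for illustration and are not the regmap core's actual regcache_sync_block() interface.

```c
#include <stdbool.h>
#include <stdio.h>

enum { BITS_PER_WORD = 8 * sizeof(unsigned long) };

/* NULL bitmap means "every register in the block is set". */
static bool reg_present(const unsigned long *present, unsigned int idx)
{
    if (!present)
        return true;
    return present[idx / BITS_PER_WORD] & (1UL << (idx % BITS_PER_WORD));
}

/* Sync one block, skipping registers the bitmap marks as never written. */
static void sync_block(const unsigned int *cache, const unsigned long *present,
                       unsigned int base_reg, unsigned int nregs)
{
    for (unsigned int i = 0; i < nregs; i++) {
        if (!reg_present(present, i))
            continue;
        printf("write %#x = %#x\n", base_reg + i, cache[i]);
    }
}

int main(void)
{
    unsigned int cache[4] = { 0x11, 0x22, 0x33, 0x44 };
    unsigned long present = 0x5;    /* only registers 0 and 2 were ever written */

    sync_block(cache, &present, 0x4000, 4);
    return 0;
}
```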
-
By Lars-Peter Clausen

Support for reducing the number of nodes and the memory consumption of the rbtree cache by allowing small unused holes in a node's register cache block was initially added in commit 0c7ed856 ("regmap: Cut down on the average # of nodes in the rbtree cache"). But the commit had problems, so its effect was reverted again in commit 4e67fb5f ("regmap: rbtree: Fix overlapping rbnodes.").

This patch brings back the feature of reducing the average number of nodes, which speeds up node look-up, while at the same time also reducing the memory usage of the rbtree cache. It takes a slightly different approach than the original patch, though: it modifies the adjacent-node look-up to consider not only nodes that are just one register to the left or right of the register, but any node that falls within a certain range around the register. The range is calculated based on how much memory it would take to allocate a new node compared to how much memory it takes to add a set of unused registers to an existing node. E.g. if a node takes up 24 bytes and each register in a block uses 1 byte, the range will be from the register address - 24 to the register address + 24. If we find a node that falls within this range, it is cheaper or as expensive to add the register to the existing node and have a couple of unused registers in the node's cache as it is to allocate a new node.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
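The cost comparison behind the widened look-up range can be sketched with the numbers from the message above; only the arithmetic follows the description, the variable names are made up.

```c
#include <stdio.h>

int main(void)
{
    /* Numbers from the commit message: a node costs 24 bytes, each cached
     * register in a block costs 1 byte. */
    unsigned int node_cost = 24;   /* memory needed for a brand-new node    */
    unsigned int word_size = 1;    /* memory needed per register in a block */

    /* Padding an existing block out to reach the new register costs
     * word_size bytes per register, so any node within node_cost/word_size
     * registers is at least as cheap to extend as allocating a new node. */
    unsigned int dist = node_cost / word_size;
    unsigned int reg = 0x100;

    printf("look for an existing node covering [%#x, %#x]\n",
           reg - dist, reg + dist);
    return 0;
}
```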
-
By Lars-Peter Clausen

A register which is adjacent to a node will either be to the left of the node's first register or to the right of its last register. It will not be within the node's range, so there is no point in checking, for each register cached by the node, whether the new register is next to it. It is sufficient to check whether the register comes right before the first register or right after the last register of the node.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
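A compact sketch of the simplified adjacency test, assuming an invented node layout and a register stride of 1; this is an illustration of the idea, not the driver's code.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative node: covers registers base_reg .. base_reg + blklen - 1. */
struct toy_node {
    unsigned int base_reg;
    unsigned int blklen;
};

/* With a register stride of 1, a register adjacent to the node can only sit
 * directly below its first register or directly above its last one, so two
 * comparisons replace a scan over every cached register. */
static bool reg_adjacent(const struct toy_node *n, unsigned int reg)
{
    return reg == n->base_reg - 1 || reg == n->base_reg + n->blklen;
}

int main(void)
{
    struct toy_node n = { .base_reg = 0x10, .blklen = 4 };  /* covers 0x10..0x13 */

    printf("0x0f adjacent: %d\n", reg_adjacent(&n, 0x0f));  /* 1 */
    printf("0x14 adjacent: %d\n", reg_adjacent(&n, 0x14));  /* 1 */
    printf("0x20 adjacent: %d\n", reg_adjacent(&n, 0x20));  /* 0 */
    return 0;
}
```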
-
- 27 Aug 2013, 1 commit

By Lars-Peter Clausen

There are a couple of calculations in regcache_rbtree_sync() and regcache_rbtree_node_alloc(), converting between register addresses and block indices, which assume that reg_stride is 1. This breaks the rbtree cache for configurations which do not use a reg_stride of 1. Also rename 'base' in regcache_rbtree_sync() to 'start' to avoid confusion with 'base_reg'.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
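The address/index conversion that the fix makes stride-aware looks roughly like this; the helper names are illustrative rather than the driver's own.

```c
#include <stdio.h>

/* Convert between a register address and its index inside a cached block,
 * assuming base_reg and reg are both multiples of reg_stride. */
static unsigned int reg_to_index(unsigned int reg, unsigned int base_reg,
                                 unsigned int reg_stride)
{
    return (reg - base_reg) / reg_stride;
}

static unsigned int index_to_reg(unsigned int idx, unsigned int base_reg,
                                 unsigned int reg_stride)
{
    return base_reg + idx * reg_stride;
}

int main(void)
{
    /* With reg_stride == 4 (e.g. 32-bit MMIO registers), register 0x108 in a
     * block starting at 0x100 lives at index 2, not index 8. */
    printf("index = %u\n", reg_to_index(0x108, 0x100, 4));
    printf("reg   = %#x\n", index_to_reg(2, 0x100, 4));
    return 0;
}
```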
-
- 22 Aug 2013, 1 commit

By David Jander

Avoid overlapping register regions by making the initial blklen of a new node 1. If a register write occurs to a yet-uncached register that is lower than but near an existing node's base_reg, a new node is created and its blklen is set to an arbitrary value (sizeof(*rbnode)). That may cause this node to overlap with another node. Those nodes should be merged, but this merge doesn't happen yet, so this patch at least makes the initial blklen small enough to avoid hitting the wrong node, which may otherwise lead to severe breakage.

Signed-off-by: David Jander <david@protonic.nl>
Signed-off-by: Mark Brown <broonie@linaro.org>
Cc: stable@vger.kernel.org
-
- 02 Jun 2013, 1 commit

By Maarten ter Huurne

A node starting before the minimum register is no reason to reject it, since its end could still be in range. The check for the end already exists two lines lower, so we can just remove the incorrect check.

Signed-off-by: Maarten ter Huurne <maarten@treewalker.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 24 May 2013, 1 commit

By Lars-Peter Clausen

The parameter passed to the regmap lock/unlock callbacks needs to be map->lock_arg, but regcache passes just map. This works fine in the case that no custom locking callbacks are used, since in this case map->lock_arg equals map, but it will break when custom locking callbacks are used. The issue was introduced in commit 0d4529c5 ("regmap: make lock/unlock functions customizable") and is fixed by this patch.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 23 May 2013, 1 commit

By Lars-Peter Clausen

The parameter passed to the regmap lock/unlock callbacks needs to be map->lock_arg, but regcache passes just map. This works fine in the case that no custom locking callbacks are used, since in this case map->lock_arg equals map, but it will break when custom locking callbacks are used. The issue was introduced in commit 0d4529c5 ("regmap: make lock/unlock functions customizable") and is fixed by this patch.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 12 May 2013, 2 commits

By Mark Brown

If range information has been provided, then when we allocate an rbnode within a range, allocate the entire range. The goal is to minimise the number of reallocations done when combining or extending blocks. At present only readability and yes_ranges are taken into account; this is expected to cover most cases efficiently.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
By Mark Brown

In preparation for being slightly smarter about how we allocate memory, factor out the node allocation.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 30 Mar 2013, 2 commits

By Mark Brown

The idea of holding blocks of registers in device format is shared between at least the rbtree and lzo cache formats, so split the loop that does the sync out of the rbtree code so that optimisations on it can be reused.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
-
By Mark Brown

The idea of maintaining a bitmap of present registers is something that can usefully be used by other cache types that maintain blocks of cached registers, so move the code out of the rbtree cache and into the generic regcache code. Refactor the interface slightly as we go to wrap the set-bit and enlarge-bitmap operations (since we never do one without the other), and make it more robust for reads of uncached registers by bounds checking before we look at the bitmap.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Reviewed-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
-
- 27 Mar 2013, 2 commits

By Mark Brown

This will bring no meaningful benefit by itself; it is done as a separate commit to aid bisection if there are problems with the following commits adding support for coalescing adjacent writes.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
By Dimitris Papastamos

This patch aims to bring down the average number of nodes in the rbtree cache and increase the average number of registers per node. This should improve general lookup and traversal times. It is achieved by setting the minimum size of a block within the rbnode to the size of the rbnode itself. This will essentially cache possibly non-existent registers, so to combat this scenario we keep a separate bitmap in memory which keeps track of which registers exist. The memory overhead of this change is likely in the order of ~5-10%, possibly less depending on the register file layout. On my test system with a bitmap of ~4300 bits and a relatively sparse register layout, the memory requirements for the entire cache did not increase (the number of nodes was cut down to about 50% of the original, which compensated for it).

A second patch built on top of this could look at the ratio sizeof(*rbnode) / map->cache_word_size in order to suitably adjust the block length of each block.

Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
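A minimal sketch of the minimum-block-size heuristic described above, assuming the minimum is applied to the block's byte length and using an invented node structure (the real rbnode layout differs).

```c
#include <stdio.h>

/* Illustrative node layout, not the driver's actual structure. */
struct toy_rbnode {
    unsigned int base_reg;
    unsigned int blklen;
    unsigned char *block;
};

int main(void)
{
    unsigned int wanted_bytes = 6;   /* bytes actually needed for the new block */

    /* Never make the data block smaller than the node that manages it: below
     * that point, splitting registers across extra nodes costs more memory
     * than padding the block with possibly non-existent registers (which the
     * separate "present" bitmap then filters out). */
    unsigned int min_bytes = sizeof(struct toy_rbnode);
    unsigned int blk_bytes = wanted_bytes > min_bytes ? wanted_bytes : min_bytes;

    printf("allocating a %u byte block (minimum %u)\n", blk_bytes, min_bytes);
    return 0;
}
```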
-
- 14 Mar 2013, 1 commit

By Lars-Peter Clausen

The last register block which falls into the specified range is not handled correctly. The formula which calculates the number of registers that should be synced is inverted (and off by one). E.g. if all registers in that block should be synced, only one is synced, and if only one should be synced, all (but one) are synced. To calculate the number of registers that need to be synced we need to subtract the number of the first register in the block from the max register number and add one. This patch updates the code accordingly.

The issue was introduced in commit ac8d91c8 ("regmap: Supply ranges to the sync operations").

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
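The corrected count calculation, as a tiny worked example with made-up register numbers.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers: a block whose first register is 0x10, with the
     * sync range ending at max register 0x13. */
    unsigned int block_first_reg = 0x10;
    unsigned int max_reg = 0x13;

    /* Registers to sync = max register - first register of the block + 1,
     * i.e. 4 here; the broken formula effectively computed the inverse
     * (and was off by one). */
    unsigned int count = max_reg - block_first_reg + 1;

    printf("sync %u registers starting at %#x\n", count, block_first_reg);
    return 0;
}
```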
-
- 13 Mar 2013, 1 commit

By Dimitris Papastamos

Provide a feel for how much overhead the rbtree cache adds to the game.

[Slightly reworded output in debugfs -- broonie]

Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 04 Mar 2013, 2 commits

By Mark Brown

It's more idiomatic to pass the map structure around, and this means we can use other bits of information from the map.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
By Mark Brown

If we're updating a value in place, it's more work to read the value and compare it with what we're about to set than it is to just write the value into the cache; there are no further operations after writing in the code, even though there's an early return here.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 13 Apr 2012, 2 commits

By Stephen Warren

regmap_config.reg_stride is introduced. All extant register addresses are a multiple of this value. Users of serial-oriented regmap busses will typically set this to 1. Users of the MMIO regmap bus will typically set this based on the value size of their registers, in bytes, so 4 for a 32-bit register.

Throughout the regmap code, actual register addresses are used. Wherever the register address is used to index some array of values, the address is divided by the stride to determine the index, or vice versa. Error checking is added to all entry points for register address data to ensure that register addresses actually satisfy the specified stride. The MMIO bus ensures that the specified stride is large enough for the register size.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
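A simplified sketch of the stride validation described above; check_reg() is an invented helper for illustration, not the regmap entry-point code.

```c
#include <stdio.h>
#include <errno.h>

/* Reject register addresses that are not a multiple of the configured stride. */
static int check_reg(unsigned int reg, unsigned int reg_stride)
{
    if (reg % reg_stride)
        return -EINVAL;
    return 0;
}

int main(void)
{
    /* With 32-bit MMIO registers a stride of 4 is typical: 0x10 is a valid
     * address, 0x12 is not. */
    printf("0x10 -> %d\n", check_reg(0x10, 4));
    printf("0x12 -> %d\n", check_reg(0x12, 4));
    return 0;
}
```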
-
By Stephen Warren

Some bus types have very fast IO. For these, acquiring a mutex for every IO operation is a significant overhead. Allow busses to indicate that their IO is fast, and enhance regmap to use a spinlock for those busses.

[Currently limited to native endian registers -- broonie]

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 10 Apr 2012, 1 commit

By Stephen Warren

regmap_config.reg_stride is introduced. All extant register addresses are a multiple of this value. Users of serial-oriented regmap busses will typically set this to 1. Users of the MMIO regmap bus will typically set this based on the value size of their registers, in bytes, so 4 for a 32-bit register.

Throughout the regmap code, actual register addresses are used. Wherever the register address is used to index some array of values, the address is divided by the stride to determine the index, or vice versa. Error checking is added to all entry points for register address data to ensure that register addresses actually satisfy the specified stride. The MMIO bus ensures that the specified stride is large enough for the register size.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 06 Apr 2012, 1 commit

By Stephen Warren

Some bus types have very fast IO. For these, acquiring a mutex for every IO operation is a significant overhead. Allow busses to indicate that their IO is fast, and enhance regmap to use a spinlock for those busses.

[Currently limited to native endian registers -- broonie]

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 05 Apr 2012, 1 commit

By Stephen Warren

If there are no nodes in the cache, nodes will be 0, so calculating "registers / nodes" will cause a division by zero.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
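The guard amounts to checking the node count before dividing; a trivial sketch with invented variable names.

```c
#include <stdio.h>

int main(void)
{
    unsigned int registers = 0, nodes = 0;  /* empty cache: no nodes yet */

    /* Only compute the average when there is at least one node; dividing by
     * zero here would otherwise break the statistics output for an empty
     * cache. */
    unsigned int average = nodes ? registers / nodes : 0;

    printf("average registers per node: %u\n", average);
    return 0;
}
```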
-
- 01 Apr 2012, 1 commit

By Lars-Peter Clausen

The code currently passes the register offset within the current block to regcache_lookup_reg. This works fine as long as there is only one block with a base register of 0, but in all other cases it will look up the default for the wrong register, which can cause unnecessary register writes. This patch fixes it by passing the actual register number to regcache_lookup_reg.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: <stable@vger.kernel.org>
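A rough sketch of the difference, using an invented node structure and a pretend default table; the point is only that the absolute register number (base_reg + i), not the block offset i, is what must be looked up.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative structures only, not the regmap rbtree types. */
struct toy_node {
    unsigned int base_reg;       /* first register covered by the block */
    unsigned int blklen;
    unsigned int values[8];
};

/* Stand-in for a default lookup: pretend every register defaults to 0. */
static bool value_is_default(unsigned int reg, unsigned int val)
{
    (void)reg;
    return val == 0;
}

static void sync_node(const struct toy_node *node)
{
    for (unsigned int i = 0; i < node->blklen; i++) {
        /* The default table is indexed by the absolute register number, so
         * the node's base register must be added back; passing just i only
         * happens to work when base_reg is 0. */
        unsigned int reg = node->base_reg + i;

        if (value_is_default(reg, node->values[i]))
            continue;                            /* nothing to write back */
        printf("write %#x = %#x\n", reg, node->values[i]);
    }
}

int main(void)
{
    struct toy_node n = { .base_reg = 0x40, .blklen = 3, .values = { 0, 5, 0 } };
    sync_node(&n);
    return 0;
}
```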
-
- 12 Mar 2012, 1 commit

By Paul Gortmaker

For files that are actively using linux/device.h, make sure that they call it out. This will allow us to clean up some of the implicit uses of linux/device.h within include/* without introducing build regressions.

Yes, this was created by "cheating" -- i.e. the headers were cleaned up, and then the fallout was found and fixed, and then the two commits were reordered. This ensures we don't introduce build regressions into the git history.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 06 Mar 2012, 2 commits

By Mark Brown

Otherwise we'll end up running with bogus register numbers.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
By Mark Brown

Most of the current users have register 0 as a volatile register or don't have a register 0, so it's not been apparent that it's not getting synced.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 24 Feb 2012, 1 commit

By Mark Brown

In order to allow us to support partial sync operations, add minimum and maximum register arguments to the sync operation and update the rbtree and lzo caches to use this new information. The LZO implementation is obviously not good; we could exit the iteration earlier, but there may be room for more wide-reaching optimisation there.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
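A small sketch of clamping one cached block against a partial sync window; the variable names are illustrative, not the regmap sync implementation.

```c
#include <stdio.h>

int main(void)
{
    /* A block covering registers base_reg .. base_reg + blklen - 1, and a
     * partial sync request limited to [min, max]. */
    unsigned int base_reg = 0x20, blklen = 16;   /* block covers 0x20..0x2f */
    unsigned int min = 0x28, max = 0x2b;

    /* Intersect the block with the requested window. */
    unsigned int first = base_reg > min ? base_reg : min;
    unsigned int last  = base_reg + blklen - 1 < max ? base_reg + blklen - 1 : max;

    if (first > last)
        printf("block entirely outside the window, skip it\n");
    else
        printf("sync registers %#x..%#x from this block\n", first, last);
    return 0;
}
```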
-
- 22 Nov 2011, 2 commits

By Mark Brown

The debugfs functions don't stub themselves out quite as well as might be desirable, so provide functions which do this stubbing.

Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
By Mark Brown

Show the register ranges we have in each rbtree node in debugfs, plus some statistics on how big each node is and the total number of nodes. It may also be worth collecting data on the ranges of dirty registers to see if there's much mileage in trying to coalesce writes on sync.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 16 Nov 2011, 1 commit

By Lars-Peter Clausen

Calling regcache_exit from regcache_rbtree_init is first of all a layering violation and secondly will cause double frees. regcache_exit will free buffers allocated by the core, but the core will also free the same buffers when the cacheops init callback returns an error, so we end up with a double free. Fix this by not calling regcache_exit and instead only freeing those buffers which have been allocated in this function.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 10 Oct 2011, 3 commits

By Mark Brown

Simplify the check for registers set at their default value by avoiding picking a default value in the case where we don't have one. Instead we only compare the current value to the default value when we have looked one up. This fixes the case where we don't have a default stored but the value was set to zero, when that isn't the chip default.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
-
By Mark Brown

Ensure that when we start up in cache-only mode we can store defaults of zero; otherwise, if the hardware is unavailable, we won't be able to read.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
-
By Mark Brown

If a register isn't cached, then let callers know that so they can fall back or handle the error appropriately.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
-
- 30 Sep 2011, 1 commit

By Dimitris Papastamos

Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
- 28 Sep 2011, 1 commit

By Lars-Peter Clausen

Move the handling of the cached rbnode into regcache_rbtree_lookup. This allows us to remove some duplicated code sections in regcache_rbtree_read and regcache_rbtree_write.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-