- 13 August 2015, 1 commit

Submitted by Cristina Opriceana

Change the return value to 0 if no device is bound, since an unsigned int cannot carry negative error codes.
Fixes: f18e7a06 ("iio: Return -ENODEV for file operations if the device has been unregistered")
Signed-off-by: Cristina Opriceana <cristina.opriceana@gmail.com>
Cc: <Stable@vger.kernel.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
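For context, a minimal sketch of the pattern being fixed (illustrative only, not the literal kernel diff): a poll()-style handler returns an unsigned event mask, so a negative errno such as -ENODEV cannot be propagated and 0 is the only sensible "nothing to report" value.

```c
#include <linux/poll.h>
#include <linux/iio/iio.h>

/* Sketch only: the handler's return type is an unsigned event mask, so a
 * negative errno would be misread as a huge set of poll flags. */
static unsigned int example_iio_poll(struct file *filp, poll_table *wait)
{
	struct iio_dev *indio_dev = filp->private_data;

	if (!indio_dev->info)	/* device already unregistered */
		return 0;	/* report "no events", not -ENODEV */

	/* ... normal poll_wait() / POLLIN | POLLRDNORM handling ... */
	return 0;
}
```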
-
- 03 August 2015, 1 commit

Submitted by Cristina Opriceana

Fix kernel docs for structures and functions in order to remove some warnings when the documentation gets generated.
Signed-off-by: Cristina Opriceana <cristina.opriceana@gmail.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 21 June 2015, 1 commit

Submitted by Octavian Purdila

This patch changes the semantics of non-blocking reads so that a hardware fifo flush is triggered if the available data in the device buffer is less than the requested size. This allows userspace to accurately generate hardware fifo flushes by doing a non-blocking read with a size greater than the sum of the device buffer and hardware fifo size.
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
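A hypothetical userspace sketch of the technique this enables (the device path is the standard IIO character device; the buffer size is illustrative and should exceed the device buffer plus the hardware FIFO):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Illustrative size, chosen to exceed device buffer + hardware FIFO. */
	static char buf[64 * 1024];
	int fd = open("/dev/iio:device0", O_RDONLY | O_NONBLOCK);
	ssize_t n;

	if (fd < 0)
		return 1;
	/* Non-blocking read larger than the available data forces a flush. */
	n = read(fd, buf, sizeof(buf));
	printf("got %zd bytes\n", n);
	close(fd);
	return 0;
}
```

Because the request cannot be satisfied from the software buffer alone, the core asks the driver to flush the hardware FIFO first, then returns whatever is available.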
-
- 01 June 2015, 4 commits

Submitted by Lars-Peter Clausen

In hardware mode we cannot use the software demuxer, which means that the selected scan mask needs to match one of the available scan masks exactly. It also means that all attached buffers need to use the same scan mask. Given that when operating in hardware mode there is typically only a single buffer attached to the device, this is not an issue. Nevertheless, add a sanity check to make sure that only a single buffer is attached in hardware mode.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

For each buffer type, specify the device modes supported by this buffer. This allows us, for devices which support multiple different operating modes, to pick the correct operating mode based on the modes supported by the attached buffers. It also prevents buffers with conflicting modes from being attached to a device at the same time, and prevents a buffer with an unsupported mode from being attached to a device (e.g. an in-kernel callback buffer to a device that only supports hardware mode).
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
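A sketch of how a buffer implementation might advertise its supported modes under this scheme (the .modes field location, header, and mode constants are assumptions based on the description, not a quote of the actual patch):

```c
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

/* Sketch (names assumed): a buffer implementation advertises which device
 * operating modes it can work with, so the core can pick a compatible mode
 * and reject buffers whose modes conflict with each other or the device. */
static const struct iio_buffer_access_funcs example_buffer_access_funcs = {
	/* .store_to, .read_first_n, .request_update, ... omitted for brevity */
	.modes = INDIO_BUFFER_SOFTWARE | INDIO_BUFFER_TRIGGERED,
};
```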
-
Submitted by Lars-Peter Clausen

Even if no userspace consumer buffer is attached to the IIO device at registration, we still need to compute the masklength, since it is possible that an in-kernel consumer buffer will be attached to the device at a later point.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Laurent Navet

The same code is executed regardless of the ret value, so this test can be removed. Also fixes coverity scan CID 1268786.
Signed-off-by: Laurent Navet <laurent.navet@gmail.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 23 May 2015, 3 commits

Submitted by Lars-Peter Clausen

Currently, when something goes wrong at some step while disabling the buffers, we immediately abort. This has the effect that the enable/disable calls are no longer balanced. So make sure that even if one step in the disable sequence fails, the other steps are still executed. The other issue is that when either enable or disable fails, buffers that were active at that time stay active while the device itself is disabled. This leaves things in an inconsistent state and can cause unbalanced enable/disable calls. Furthermore, when enable fails we restore the old scan mask but still keep things disabled. Given that the configuration was verified earlier and is valid at the point where we try to enable/disable, the most likely reasons for failure are a communication failure with the device or perhaps an out-of-memory situation. There is not really a good recovery strategy in such a case, so it makes sense to leave the device disabled, but we should still leave it in a consistent state. What the patch does if disable/enable fails is to deactivate all buffers and make sure that the device will be in the same state as if all buffers had been manually disabled.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

__iio_update_buffers() is already a rather large function with many different error paths, and it is going to get even larger. This patch factors out the device enable and device disable paths into separate helper functions. The patch also re-implements iio_disable_all_buffers() using the new iio_disable_buffers() function, removing a fair bit of redundant code.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

Currently __iio_update_buffers() verifies whether the new configuration will work in the middle of the update sequence. This means that if the new configuration is invalid, we need to roll back the changes already made. This patch moves the validation of the new configuration to the beginning of __iio_update_buffers(), so that no changes are made at all if the new configuration is invalid.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 17 May 2015, 3 commits

Submitted by Lars-Peter Clausen

We only have to call the request_update() callback for a newly inserted buffer. The configuration of the buffers that were already active will not have changed. This also allows us to move the request_update() call to the beginning of __iio_update_buffers(), before any currently active buffers are stopped. This makes the error handling a lot easier, since no changes have been made to the buffer list yet and no rollback needs to be performed.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

Add a small helper function, iio_free_scan_mask(), that takes a mask and frees its memory if the scan masks for the device are dynamically allocated, and otherwise does nothing. This means we don't have to open-code the same check over and over again in __iio_update_buffers(). Also free compound_mask as soon as we are done using it. This constrains its usage to a specific region of the function, which will make further refactoring and splitting of the function into smaller sub-parts easier.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
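Based on the description, the helper plausibly reduces to something like this (a sketch; dynamically allocated masks only exist when the driver provides no fixed available_scan_masks array):

```c
#include <linux/slab.h>
#include <linux/iio/iio.h>

/* Sketch: free the mask only if it was dynamically allocated, i.e. the
 * device does not provide a fixed set of available scan masks. */
static void iio_free_scan_mask(struct iio_dev *indio_dev,
			       const unsigned long *mask)
{
	if (!indio_dev->available_scan_masks)
		kfree(mask);
}
```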
-
Submitted by Lars-Peter Clausen

While more verbose error messages are useful for debugging, we should really not put such messages into the kernel log, when running in non-debug mode, for normal errors that are already reported to the application via the error code. Otherwise application authors might assume that this is part of the ABI and that, to get the error, they should scan the kernel log. That would be rather error prone in itself, since there is no direct mapping between an operation and the error message, so it is impossible to tell which error message belongs to which error.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 29 March 2015, 2 commits

Submitted by Octavian Purdila

Some devices have hardware buffers that can store a number of samples for later consumption. Hardware usually provides interrupts to notify the processor when the FIFO is full or when it has reached a certain watermark level. This helps reduce the number of interrupts to the host processor and thus helps decrease power consumption. This patch enables the use of hardware FIFOs for IIO devices in conjunction with software device buffers. When the hardware FIFO is enabled, the samples are stored in the hardware FIFO. The samples are later flushed to the device's software buffer when the number of entries in the hardware FIFO reaches the hardware watermark, or when a flush operation is triggered by the user by doing a non-blocking read on an empty software device buffer. In order to implement hardware FIFO support, device drivers must implement the following new operations: setting and getting the hardware FIFO watermark level, and flushing the hardware FIFO to the software device buffer. The device must also expose information about the hardware FIFO, such as its minimum and maximum watermark and, if necessary, a list of supported watermark values. Finally, the device driver must activate the hardware FIFO when the device buffer is enabled, if the current device settings allow it. The software device buffer watermark is passed by the IIO core to the device driver as a hint for the hardware FIFO watermark. The device driver can adjust this value to allow for hardware limitations (such as capping it to the maximum hardware watermark or adjusting it to a value that is supported by the hardware). It can also disable the hardware watermark (and implicitly the hardware FIFO) if this value is below the minimum hardware watermark. Since a driver may support the hardware FIFO only when not in triggered buffer mode (due to the different semantics of hardware FIFO sampling and triggered sampling), this patch changes the IIO core code to allow falling back to non-triggered buffered mode if no trigger is enabled.
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
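A sketch of the driver-side hooks described above (signatures approximated from the description and wrapped in a hypothetical ops struct, rather than asserting their exact place in the IIO driver interface):

```c
#include <linux/iio/iio.h>

/* Sketch of the driver-facing hardware FIFO operations described above. */
struct example_hwfifo_ops {
	/* Take the software watermark as a hint; the driver may clamp or
	 * round it, or return an error to keep the hardware FIFO disabled. */
	int (*hwfifo_set_watermark)(struct iio_dev *indio_dev,
				    unsigned int val);
	/* Move up to count samples from the hardware FIFO into the software
	 * device buffer; used for watermark interrupts and explicit flushes
	 * triggered by a non-blocking read on an empty buffer. */
	int (*hwfifo_flush_to_buffer)(struct iio_dev *indio_dev,
				      unsigned int count);
};
```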
-
Submitted by Josselin Costanzi

Currently the IIO buffer blocking read only waits until at least one data element is available. This patch makes the reader sleep until enough data has been collected before returning to userspace. This should limit the number of read() calls when trying to get data in batches.
Co-author: Yannick Bedhomme <yannick.bedhomme@mobile-devices.fr>
Signed-off-by: Josselin Costanzi <josselin.costanzi@mobile-devices.fr>
[rebased and remove buffer timeout]
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 05 February 2015, 1 commit

Submitted by Octavian Purdila

Move all core (non-custom) buffer attributes to a vector to make it easier to add more of them in the future.
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 26 January 2015, 1 commit

Submitted by Karol Wrona

There was a need for a non-triggered software buffer type. It can be used when the triggered model does not fit and INDIO_BUFFER_HARDWARE causes confusion because the data stream is not obtained directly from a hardware backend.
Suggested-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Karol Wrona <k.wrona@samsung.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 06 January 2015, 1 commit

Submitted by Octavian Purdila

Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 12 December 2014, 6 commits

Submitted by Lars-Peter Clausen

We already have the length field in struct iio_buffer, which is expected to be in sync with the current size of the buffer. Currently all implementations of the get_length callback either return this field or a constant number. This patch removes the get_length callback and replaces all occurrences in the IIO core with direct accesses to the length field of the buffer.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

If a buffer implementation does not implement the set_length() callback, the length will be static and cannot be changed by userspace. Mark the length attribute as a read-only property in this case, so that userspace is aware of this rather than just silently accepting any length value.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
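A minimal sketch of the described behaviour (the helper name and the attribute plumbing around it are hypothetical):

```c
#include <linux/stat.h>
#include <linux/iio/buffer.h>

/* Sketch: drop write permission on the "length" attribute when the buffer
 * implementation cannot change its length at runtime. */
static umode_t example_length_attr_mode(const struct iio_buffer *buffer)
{
	if (!buffer->access->set_length)
		return S_IRUGO;			/* length is fixed: read-only */
	return S_IRUGO | S_IWUSR;
}
```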
-
Submitted by Lars-Peter Clausen

All buffers want at least the length and the enable attribute. Move the creation of those attributes to the core instead of having to do this in each individual buffer implementation. This allows us to get rid of some boilerplate code.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

The next patch will introduce new dependencies in iio_buffer_alloc_sysfs() on functions which are currently defined after iio_buffer_alloc_sysfs(). To avoid forward declarations, move both iio_buffer_alloc_sysfs() and iio_buffer_free_sysfs() after those functions. This is split into two patches, one moving the functions and one adding the dependencies, to make review of the actual changes easier.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

Originally, device and buffer registration were kept as separate operations in IIO to allow registering two distinct sets of channels for buffered and non-buffered operation. This has since been further restricted and the channel set registered for the buffer needs to be a subset of the channel set registered for the device. Additionally, the possibility of not having a raw (or processed) attribute for a channel which was registered for the device was added a while ago. This means it is possible to register no device-level attributes for a channel even if it is registered for the device. Also, if a channel's scan_index is set to -1 and the channel is registered for the buffer, it is ignored. So, in summary, it is possible to register the same channel array for both the device and the buffer yet still end up with distinct sets of channels for each of them. This invalidates the argument for having to manually register the channels separately for the device and the buffer. Considering that the vast majority of drivers want to register the same set of channels for both the buffer and the device, it makes sense to move the buffer registration into the core to avoid some boilerplate code in the device driver setup path.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

Individual drivers should not be messing with the scan mask that contains the list of enabled channels; this is something that is supposed to be managed by the core. Now that the last few drivers that used it to configure a default scan mask have been updated to no longer do so, we can unexport the function. Note, this patch also requires moving a few functions around so they are all declared before their first internal user.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Reviewed-by: Daniel Baluta <daniel.baluta@intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 08 August 2014, 1 commit

Submitted by Jonathan Cameron

The size of the allocation is currently set to the size of the pointer rather than the structure we should actually be allocating.
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Reported-by: kbuild@01.org
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
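The bug class being fixed, shown in a generic and purely hypothetical form (not the actual driver code):

```c
#include <linux/slab.h>

struct example_state {
	int foo;
	int bar;
};

static struct example_state *example_alloc(void)
{
	struct example_state *st;

	/* Buggy form: allocates only sizeof(struct example_state *) bytes,
	 * i.e. the size of the pointer, so later writes overflow the object.
	 *
	 *     st = kzalloc(sizeof(st), GFP_KERNEL);
	 */

	/* Fixed form: allocate the size of the structure itself. */
	st = kzalloc(sizeof(*st), GFP_KERNEL);
	return st;
}
```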
-
- 02 August 2014, 1 commit

Submitted by Lars-Peter Clausen

When copying multiple samples that are adjacent in both the source and the destination buffer, instead of creating a new demux table entry for each sample, just increase the length of the previous entry by the size of the new sample. This makes the demuxing process slightly more efficient.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 28 July 2014, 1 commit

Submitted by Lars-Peter Clausen

Makes the code slightly shorter and a bit easier to understand.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 20 July 2014, 1 commit

Submitted by Lars-Peter Clausen

When creating the demux table we need to iterate over the scan mask selected for the buffer to get the samples which should be copied to the destination buffer. Right now the code uses the mask which contains all active channels, which means the demux table contains entries that cause it to copy all the samples from the source to the destination buffer one by one, without doing any demuxing.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Cc: Stable@vger.kernel.org
-
- 29 June 2014, 1 commit

Submitted by Josselin Costanzi

Change the sca3000_ring implementation so that it exports a data_available function to IIO.
Signed-off-by: Josselin Costanzi <josselin.costanzi@mobile-devices.fr>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 30 April 2014, 1 commit

Submitted by Srinivas Pandruvada

The current scan element type uses the following format: [be|le]:[s|u]bits/storagebits[>>shift]. To allow multiple elements of this type to be specified, a repeat value is added. The new format is: [be|le]:[s|u]bits/storagebitsXr[>>shift], where r specifies how many times the real/storage bits are repeated. When the repeat value is 0 or 1, it is not used in the format, which is then the same as the existing format.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
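A hypothetical channel declaration using the new repeat count (field names are assumed from the sysfs format described above; the exact struct layout should be checked against the kernel headers in use):

```c
#include <linux/iio/iio.h>

/* Sketch: two consecutive 16-bit samples packed into one scan element; the
 * corresponding scan_elements/..._type file would then read "le:s16/16X2>>0". */
static const struct iio_chan_spec example_channel = {
	.type = IIO_ACCEL,
	.modified = 1,
	.channel2 = IIO_MOD_X,
	.scan_index = 0,
	.scan_type = {
		.sign = 's',
		.realbits = 16,
		.storagebits = 16,
		.repeat = 2,	/* the new repeat count (assumed field name) */
		.endianness = IIO_LE,
	},
};
```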
-
- 22 March 2014, 1 commit

Submitted by Alec Berg

Ensure that querying the IIO buffer scan_mask returns a value of 0 or 1. Currently, querying the scan mask returns the value produced by test_bit(), which returns either true or false. On some architectures test_bit() may return -1 for true, which will appear to be an error when returned from iio_scan_mask_query(). Additionally, it is important for the sysfs interface to consistently return the same thing when querying the scan_mask.
Signed-off-by: Alec Berg <alecaberg@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
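A sketch of the normalisation described (the function name is hypothetical): double negation forces test_bit()'s architecture-defined truth value to a strict 0 or 1 before it reaches sysfs or gets compared against an error code.

```c
#include <linux/bitops.h>

/* Sketch: normalise test_bit()'s result so callers always see 0 or 1. */
static int example_scan_mask_query(const unsigned long *mask, unsigned int bit)
{
	return !!test_bit(bit, mask);
}
```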
-
- 18 February 2014, 1 commit

Submitted by Hartmut Knaack

Get rid of obsolete uses of goto error_ret and some empty lines.
Signed-off-by: Hartmut Knaack <knaack.h@gmx.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 04 December 2013, 2 commits

Submitted by Lars-Peter Clausen

Currently the IIO buffer interface only allows non-blocking reads. This patch adds support for blocking I/O. In blocking mode the thread will go to sleep if no data is available and will wait for the buffer implementation to signal that new data is available by waking up the buffer's waitqueue.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
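A minimal sketch of the blocking-read flow described (the data_is_available() helper is hypothetical and stands in for the buffer implementation's own check; the placeholder uses the stufftoread flag mentioned in the neighbouring commit):

```c
#include <linux/fs.h>
#include <linux/wait.h>
#include <linux/iio/buffer.h>

/* Hypothetical stand-in for the buffer implementation's availability check. */
static bool data_is_available(struct iio_buffer *buffer)
{
	return buffer->stufftoread;
}

/* Sketch: non-blocking readers bail out immediately; blocking readers sleep
 * on the buffer's waitqueue until the implementation signals new data. */
static ssize_t example_buffer_read(struct file *filp, struct iio_buffer *buffer)
{
	int ret;

	if (!data_is_available(buffer)) {
		if (filp->f_flags & O_NONBLOCK)
			return -EAGAIN;

		ret = wait_event_interruptible(buffer->pollq,
					       data_is_available(buffer));
		if (ret)
			return ret;	/* interrupted by a signal */
	}

	/* ... copy the available data to userspace ... */
	return 0;
}
```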
-
Submitted by Lars-Peter Clausen

This patch adds a new data_available() callback to the iio_buffer_access_funcs struct. The callback is used to indicate whether data is available in the buffer for reading. It is meant to replace the stufftoread flag from the iio_buffer struct. The reasoning is that the buffer implementation can usually determine rather easily, based on its own state, whether data is available; on the other hand, it can be rather tricky to update the stufftoread flag in a race-free way.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
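A sketch of what a buffer implementation's data_available() callback could look like under this scheme (the surrounding struct and the exact signature are assumptions; the callback's return type later evolved to report a sample count):

```c
#include <linux/kfifo.h>
#include <linux/iio/buffer.h>

struct example_fifo {
	struct iio_buffer buffer;
	DECLARE_KFIFO_PTR(kf, u8);
};

/* Sketch: the implementation answers "is data available?" directly from its
 * own state, which is easier to keep race-free than a separate stufftoread
 * flag updated from several code paths. */
static bool example_data_available(struct iio_buffer *buffer)
{
	struct example_fifo *fifo =
		container_of(buffer, struct example_fifo, buffer);

	return !kfifo_is_empty(&fifo->kf);
}
```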
-
- 17 October 2013, 2 commits

Submitted by Lars-Peter Clausen

The functionality implemented by iio_sw_buffer_preenable() is now done directly in the IIO core, and previous users of iio_sw_buffer_preenable() have all been updated to no longer use it. It is unused now and can be removed.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
Submitted by Lars-Peter Clausen

Currently an IIO device driver needs to make sure to update the buffer's bytes per datum after the scan mask has changed. This is usually done in the preenable callback by invoking iio_sw_buffer_preenable(). This is something that needs to be done, and is done, for virtually all devices which support buffers (we currently have only one exception). It is also a bit of a layering violation, since we have to call the buffer setup ops from the device setup ops. This requires the device driver to know about the internal requirements of the buffer (e.g. whether we need to call the set_bytes_per_datum callback). And especially with in-kernel buffer consumers, which allow arbitrary buffers to be attached to a device, this is something the driver can't know. Moving this to the core allows us to drop the individual calls to iio_sw_buffer_preenable() from drivers.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Denis Ciocca <denis.ciocca@st.com>
Cc: Marek Vasut <marex@denx.de>
Cc: Zubair Lutfullah <zubair.lutfullah@gmail.com>
Cc: Jacek Anaszewski <j.anaszewski@samsung.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 16 October 2013, 1 commit

Submitted by Lars-Peter Clausen

Usually the active scan mask is freed in __iio_update_buffers() when the buffer is disabled. But when the device is still sampling at the time it is removed, we end up disabling the buffers in iio_disable_all_buffers(). So we also need to free the active scan mask there, otherwise it will be leaked.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
-
- 12 October 2013, 3 commits

Submitted by Lars-Peter Clausen

Since the kernel now disables all buffers when a device is unregistered, it might happen that an in-kernel consumer tries to disable that buffer again. So ignore requests where the buffer is already in the desired state.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
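A sketch of the described guard (helper and struct member names are assumptions): requests that would not change a buffer's state are turned into no-ops before any real work is done.

```c
#include <linux/list.h>
#include <linux/iio/buffer.h>

/* Hypothetical helper: a buffer counts as "active" when it is on the
 * device's list of currently enabled buffers. */
static bool example_buffer_is_active(struct iio_buffer *buf)
{
	return !list_empty(&buf->buffer_list);
}

/* Sketch of the guard in an iio_update_buffers()-style entry point: a second
 * "disable" arriving after unregistration simply becomes a no-op. */
static int example_update_buffers(struct iio_buffer *insert_buffer,
				  struct iio_buffer *remove_buffer)
{
	if (insert_buffer && example_buffer_is_active(insert_buffer))
		insert_buffer = NULL;
	if (remove_buffer && !example_buffer_is_active(remove_buffer))
		remove_buffer = NULL;

	if (!insert_buffer && !remove_buffer)
		return 0;	/* nothing to do */

	/* ... perform the actual update ... */
	return 0;
}
```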
-
Submitted by Lars-Peter Clausen

We have the same code to free an IIO device attribute list in multiple places. This patch adds a new helper function to take care of this and replaces the custom instances with a call to the helper function. Note that we do not need to call list_del() for each of the list items, since we will never look at any of the list items, nor the list itself, again.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
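A plausible shape for such a helper (a sketch; the struct iio_dev_attr member names are assumptions). As the text notes, no list_del() is needed because neither the entries nor the list head are used again.

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/iio/sysfs.h>

/* Sketch: walk the attribute list once, freeing each entry's allocated name
 * and the entry itself; the list head is simply abandoned afterwards. */
static void example_free_chan_devattr_list(struct list_head *attr_list)
{
	struct iio_dev_attr *p, *n;

	list_for_each_entry_safe(p, n, attr_list, l) {
		kfree(p->dev_attr.attr.name);
		kfree(p);
	}
}
```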
-
Submitted by Lars-Peter Clausen

We need to make sure that in-kernel users of iio_update_buffers() do not race against each other or against unregistration of the device. So we need to take both the mlock and the info_exist_lock when calling iio_update_buffers() from an in-kernel consumer.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
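A sketch of the locking order described (the lock member names are taken from the commit text and that era's struct iio_dev; treat the surrounding function as an assumption, not the actual implementation):

```c
#include <linux/mutex.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

/* Sketch: an in-kernel caller must hold both locks so the update can race
 * neither another update (mlock) nor device unregistration (info_exist_lock). */
static int example_iio_update_buffers(struct iio_dev *indio_dev,
				      struct iio_buffer *insert_buffer,
				      struct iio_buffer *remove_buffer)
{
	int ret = 0;

	mutex_lock(&indio_dev->info_exist_lock);
	mutex_lock(&indio_dev->mlock);

	if (!indio_dev->info) {
		ret = -ENODEV;		/* device was unregistered meanwhile */
		goto out_unlock;
	}

	/* ... call the unlocked __iio_update_buffers() helper here ... */

out_unlock:
	mutex_unlock(&indio_dev->mlock);
	mutex_unlock(&indio_dev->info_exist_lock);
	return ret;
}
```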
-