- 30 Oct 2014, 6 commits
-
-
By fryshorts
This adds a helper function to enable or disable all properties, which is used in the device-selected callback to enable/disable the properties when the selected device becomes available or unavailable.
-
By fryshorts
This adds some code to the device enumeration that checks whether the currently selected device is present. If it is not, the device is still added but disabled.
-
By fryshorts
This moves the calls to the property-modified functions so that the old handler can close the device first. Otherwise the device would be opened multiple times.
-
By fryshorts
This replaces the variable in the source struct that was used to print the number of captured frames with a local one.
-
By fryshorts
This replaces the variables in the source struct that handle the timestamp offset with locals in the capture thread. This change is mostly to make the code more readable.
-
By fryshorts
This moves the enabling/resetting of the file descriptors inside the capture loop so it is done before each select call. Why this even worked before is unclear, but doing it the *right* way seems to reduce latency.
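A minimal sketch of why the reset belongs inside the loop (helper name is hypothetical, not the plugin's actual code): select() overwrites the fd_set in place with only the descriptors that became ready, so the set must be rebuilt before every call.

```c
#include <sys/select.h>

/* Rebuild the read set for one device descriptor and return the
 * nfds value select() expects. This must run before EVERY call to
 * select(), because select() modifies the set in place. */
int reset_read_fds(int dev_fd, fd_set *fds)
{
	FD_ZERO(fds);
	FD_SET(dev_fd, fds);
	return dev_fd + 1;
}
```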
-
- 29 Oct 2014, 6 commits
- 28 Oct 2014, 2 commits
-
-
By Jim
Remove ingest server live-ord.twitch.tv from services.json
-
By Gol-D-Ace
The server doesn't exist anymore: https://api.twitch.tv/kraken/ingests http://blog.twitch.tv/2014/09/service-update-chicago-point-of-presence-closure/
-
- 27 Oct 2014, 1 commit
-
-
By Palana
Transparency is now disabled by default, so that alpha values from injected back buffers don't propagate to OBS (e.g. Minecraft doesn't render properly in OBS unless "Allow Transparency" is disabled).
-
- 24 Oct 2014, 5 commits
-
-
By jp9000
When a new device starts up, make the first timestamp it produces start from 0. This prevents the internal source timestamp handling from trying to buffer new frames up to the new timestamp value when the device changes.
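The rebasing described above can be sketched as follows (the helper and its state are hypothetical, for illustration only): record the first raw timestamp seen after a device (re)start and subtract it from every subsequent one.

```c
#include <stdint.h>

/* Hypothetical helper: rebase raw device timestamps so the first
 * frame after a device (re)start lands at 0, instead of at the
 * device's arbitrary clock value. */
static uint64_t ts_offset;
static int ts_offset_set;

uint64_t rebase_timestamp(uint64_t raw_ts)
{
	if (!ts_offset_set) {
		ts_offset = raw_ts;     /* first frame defines the origin */
		ts_offset_set = 1;
	}
	return raw_ts - ts_offset;      /* first frame returns 0 */
}
```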
-
By jp9000
Due to potential driver issues with certain devices, the timestamps are not always reliable. This option allows using the time at which the frame was received as the timestamp instead.
-
By jp9000
This moves the 'flags' variable from the obs_source_frame structure to the obs_source structure, and allows user flags to be set for a specific source; having it on the obs_source_frame structure didn't make much sense. OBS_SOURCE_UNBUFFERED makes it so that the source does not buffer its async video output in order to play it on time; in other words, frames are played as soon as possible after being received. This is useful when you want a source to play back as quickly as possible (webcams, certain types of capture devices).
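A minimal sketch of the design change, with a stand-in struct and an illustrative bit value rather than the actual libobs definitions: the flag now lives on the persistent source object, so it can be set once and consulted for every frame.

```c
#include <stdint.h>

#define OBS_SOURCE_UNBUFFERED (1 << 0) /* bit value is illustrative */

/* Stand-in for obs_source: user flags live on the source itself,
 * not on each obs_source_frame. */
struct source {
	uint32_t user_flags;
};

void source_set_flags(struct source *s, uint32_t flags)
{
	s->user_flags = flags;
}

/* Checked per frame: if set, play the frame immediately instead of
 * buffering it against its timestamp. */
int source_is_unbuffered(const struct source *s)
{
	return (s->user_flags & OBS_SOURCE_UNBUFFERED) != 0;
}
```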
-
- 23 Oct 2014, 12 commits
-
-
By jp9000
This bug would happen if audio packets started being received before video packets. It would erroneously cause audio packets to be completely thrown away, and in certain cases would cause audio and video to start way out of sync. My original intention was "don't accept audio until video has started", but the code instead had the effect of "don't start audio until a video packet has been received". This was originally intended as a way to handle outputs hooking in to active encoders and compensating for their existing timestamp information. However, this made me realize that there was a major flaw in the design for handling this, so I basically rewrote the entire thing. Now, it does the following steps when inserting packets:
- Insert packets into the interleaved packet array
- When both audio and video packets have been received, prune packets up to the point at which both audio and video start at the same time
- Resort the interleaved packet array
I have tested this code extensively and it appears to be working well, regardless of whether or not the encoders were already active with another output.
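The pruning step above can be sketched like this (struct and function are hypothetical simplifications, not the actual libobs code): once both stream types have been seen, find the later of the two start times and drop everything before it.

```c
#include <stdint.h>
#include <stddef.h>

struct pkt {
	int is_video;   /* 1 = video, 0 = audio */
	uint64_t ts;    /* presentation timestamp */
};

/* Given a time-sorted interleaved packet array, return the index of
 * the first packet to keep so that audio and video start together:
 * everything before the later of the two first timestamps is pruned. */
size_t prune_start(const struct pkt *pkts, size_t n)
{
	uint64_t first_a = 0, first_v = 0;
	int have_a = 0, have_v = 0;

	for (size_t i = 0; i < n; i++) {
		if (pkts[i].is_video && !have_v) {
			first_v = pkts[i].ts;
			have_v = 1;
		} else if (!pkts[i].is_video && !have_a) {
			first_a = pkts[i].ts;
			have_a = 1;
		}
	}
	if (!have_a || !have_v)
		return 0; /* can't prune until both types have arrived */

	uint64_t start = first_a > first_v ? first_a : first_v;
	for (size_t i = 0; i < n; i++)
		if (pkts[i].ts >= start)
			return i;
	return n;
}
```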
-
By jp9000
Did not multiply by the element size for the memory move operation.
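This is a classic class of bug, illustrated here with a hypothetical array-removal helper (not the actual libobs code): memmove() takes a byte count, so moving N elements requires N * sizeof(element) bytes, not N.

```c
#include <string.h>
#include <stddef.h>

/* Remove the element at idx by shifting the tail down one slot.
 * The byte count passed to memmove() must be scaled by the element
 * size; passing only the element count moves too few bytes. */
void remove_at(int *arr, size_t *count, size_t idx)
{
	memmove(&arr[idx], &arr[idx + 1],
	        (*count - idx - 1) * sizeof(*arr));
	(*count)--;
}
```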
-
By jp9000
In video-io.c, video frames could skip, but when that happened the frame's timestamp would repeat for the next frame, giving the next frame a non-monotonic timestamp, and then jump. This could slightly mess up syncing when the frame is finally given to an output.
-
By jp9000
Apparently I unintentionally typed received_video = false twice instead of setting it once for video and once for audio. This fixes a bug where audio would not start up again on an output that had recently started and then stopped.
-
By jp9000
When the output set a new audio/video encoder, it was not properly removing itself from the previous audio/video encoders it was associated with; it was erroneously removing itself from the encoder passed as a parameter instead.
-
By jp9000
At the start of each render loop, it would get the timestamp and assign it to whatever frame was downloaded. However, the downloaded frame was usually rendered a number of frames ago, so the wrong timestamp value was assigned to it. This fixes that issue by storing the timestamps in a circular buffer.
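The circular-buffer fix can be sketched as follows (names and capacity are illustrative, not the actual libobs code): push the timestamp when a frame is queued for download, and pop it when the older frame actually comes back, so each downloaded frame receives the timestamp from when it was rendered.

```c
#include <stdint.h>

#define TS_BUF_LEN 8 /* assumed capacity; must cover the download latency */

/* Minimal FIFO of timestamps implemented as a circular buffer. */
struct ts_circlebuf {
	uint64_t ts[TS_BUF_LEN];
	int head, tail;
};

/* Called when a frame is queued: remember its render timestamp. */
void ts_push(struct ts_circlebuf *b, uint64_t ts)
{
	b->ts[b->head] = ts;
	b->head = (b->head + 1) % TS_BUF_LEN;
}

/* Called when a (older) frame finishes downloading: retrieve the
 * timestamp from when that frame was rendered, not the current one. */
uint64_t ts_pop(struct ts_circlebuf *b)
{
	uint64_t ts = b->ts[b->tail];
	b->tail = (b->tail + 1) % TS_BUF_LEN;
	return ts;
}
```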
-
By jp9000
If audio timestamps are within the operating system timing threshold, always use those values directly as the timestamp, and do not apply the regular jump checks and timing adjustments that normally occur. This potentially fixes an issue with plugins that use OS timestamps directly as timestamp values for their audio samples: it bypasses the conversion of the audio line's timing to system time and uses the OS timestamp as-is, preventing those calculations from affecting the timestamp value. For example, if the first set of audio samples from the audio source came in delayed while subsequent samples were not, those first samples could have inadvertently triggered the timing adjustments, which would then affect all subsequent audio samples.
-
By jp9000
This combines the 'direct' timestamp variance threshold with the maximum timestamp jump threshold (or rather, removes the max timestamp jump threshold and uses the timestamp variance threshold both for timestamp jumps and for detecting OS timestamps). This was done because a timestamp jump could occur at a higher threshold than the one used for detecting OS timestamps. If a timestamp fell between those two thresholds, it could end up 'stuck' back or forward in time more than intended. It is better to be consistent and use the same threshold for both.
-
By jp9000
This allows the user to select whether to use unbuffered video. Unbuffered video causes video frames to play back as soon as they're received, rather than being buffered and played back according to the timestamp value of each frame.
-
By jp9000
Add a 'flags' member variable to the obs_source_frame structure. The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as it's received (on the next frame playback), disregarding the timestamp value for the sake of video playback (note, however, that the video timestamp is still used for audio synchronization if the source has audio as well). This is partly a convenience feature, and partly a necessity for certain plugins (such as the Linux v4l plugin) where timestamp information for the video frames can sometimes be unreliable.
-
By jp9000
-
By jp9000
70 milliseconds is a bit too high for the default audio timestamp smoothing threshold; the full range of error thus becomes 140 milliseconds, which is more than necessary to worry about. For the time being, it may be worth trying 50 milliseconds.
-
- 20 Oct 2014, 7 commits
- 19 Oct 2014, 1 commit
-
-
By Jim
Update Ubuntu PPA install instructions
-