- 17 November 2014, 4 commits
-
Committed by jp9000
This allows the 0x* format of hexadecimal integers to be used as config integers.
-
Committed by jp9000
This allows opening ini config data from a string. Before, it was only possible to load config from a file.
-
Committed by jp9000
This function simplifies drawing textures for sources in order to help reduce boilerplate code. If a source is a custom drawn source, it will automatically set up the effect to draw the sprite. If it's not a custom drawn source, it will simply draw the sprite as normal. If the source uses a specific color matrix, it will handle that as well.
-
Committed by fryshorts
This replaces the call to fmaxf with an equivalent inline comparison, which is a bit faster (at least on Linux).
-
- 15 November 2014, 1 commit
-
Committed by Christoph Hohmann
When the image data is copied into a texture with flipping set to true, each row has to be copied into the (height - row - 1)th row instead of the row with the same number. Otherwise it will just create an unflipped copy.
-
- 04 November 2014, 2 commits
-
Committed by jp9000
-
Committed by jp9000
Apparently the audio isn't guaranteed to start up past the first video frame, so it would trigger that assert (which I'm glad I put in). I didn't originally have this happen when I was testing because my audio buffering was not at the default value, so it didn't trigger. A blunder on my part, and once again a fine example of why you should never make assumptions about possible code paths.
-
- 03 November 2014, 1 commit
-
Committed by jp9000
-
- 29 October 2014, 4 commits
- 24 October 2014, 2 commits
-
Committed by jp9000
This moves the 'flags' variable from the obs_source_frame structure to the obs_source structure, and allows user flags to be set for a specific source. Having it set on the obs_source_frame structure didn't make much sense.

OBS_SOURCE_UNBUFFERED makes it so that the source does not buffer its async video output in order to try to play it back on time; in other words, frames are played as soon as possible after being received. This is useful when you want a source to play back as quickly as possible (webcams, certain types of capture devices).
-
- 23 October 2014, 11 commits
-
Committed by jp9000
This bug would happen if audio packets started being received before video packets. It would erroneously cause audio packets to be completely thrown away, and in certain cases would cause audio and video to start way out of sync.

My original intention was "don't accept audio until video has started", but it instead mistakenly had the effect of "don't start audio until a video packet has been received". This was originally intended as a way to handle outputs hooking in to active encoders and compensating for their existing timestamp information. However, this made me realize that there was a major flaw in the design for handling this, so I basically rewrote the entire thing. Now it performs the following steps when inserting packets:

- Insert packets into the interleaved packet array
- When both audio and video packets have been received, prune packets up to the point at which both audio and video start at the same time
- Re-sort the interleaved packet array

I have tested this code extensively and it appears to be working well, regardless of whether or not the encoders were already active with another output.
-
Committed by jp9000
Did not multiply by the element size for the memory move operation.
-
Committed by jp9000
In video-io.c, video frames could skip, but when they did, the frame's timestamp would repeat for the next frame, giving the next frame a non-monotonic timestamp, and then jump. This could slightly mess up syncing when the frame was finally given to an output.
-
Committed by jp9000
Apparently I unintentionally typed received_video = false twice instead of once for video and once for audio. This fixes a bug where audio would not start up again on an output that had recently started and then stopped.
-
Committed by jp9000
When the output sets a new audio/video encoder, it was not properly removing itself from the previous audio/video encoders it was associated with. It was erroneously removing itself from the encoder parameter instead.
-
Committed by jp9000
At the start of each render loop it would get the timestamp, and then assign that timestamp to whatever frame was downloaded. However, the frame that was downloaded usually originated a number of frames earlier, so it would assign the wrong timestamp value to that frame. This fixes the issue by storing the timestamps in a circular buffer.
-
Committed by jp9000
If audio timestamps are within the operating system timing threshold, always use those values directly as the timestamp, and do not apply the regular jump checks and timing adjustments that normally occur. This potentially fixes an issue with plugins that use OS timestamps directly as timestamp values for their audio samples: it bypasses the conversion of the audio line's timing to system time and uses the OS timestamp directly as the timestamp value, preventing those calculations from affecting the audio timestamp when OS timestamps are used. For example, if the first set of audio samples from the audio source came in delayed while the subsequent samples were not, those first samples could have inadvertently triggered the timing adjustments, which would then affect all subsequent audio samples.
-
Committed by jp9000
This combines the 'direct' timestamp variance threshold with the maximum timestamp jump threshold (or rather, removes the max timestamp jump threshold and uses the timestamp variance threshold both for detecting timestamp jumps and for detecting OS timestamps). This was done because a timestamp jump could occur at a higher threshold than the threshold used for detecting OS timestamps. If timestamps fell between those two thresholds, it became a weird situation where timestamps could be 'stuck' back or forward in time more than intended. It is better to be consistent and use the same threshold for both.
-
Committed by jp9000
Add a 'flags' member variable to the obs_source_frame structure. The OBS_VIDEO_UNBUFFERED flag causes the video to play back as soon as it's received (in the next frame playback), disregarding the timestamp value for the sake of video playback (note, however, that the video timestamp is still used for audio synchronization if audio is present on the source). This is partly a convenience feature and partly a necessity for certain plugins (such as the Linux v4l plugin) where timestamp information for the video frames can sometimes be unreliable.
-
Committed by jp9000
-
Committed by jp9000
70 milliseconds is a bit too high for the default audio timestamp smoothing threshold. The full range of error thus becomes 140 milliseconds, which is a bit more than necessary to worry about. For the time being, I feel it may be worth it to try 50 milliseconds.
-
- 19 October 2014, 7 commits
- 14 October 2014, 5 commits
-
Committed by Palana
This should fix blending issues on stream/recording that weren't visible in the preview.
-
Committed by jp9000
Instead of limiting this to an Apple-specific define, just limit it to OpenGL, because this particular effect file cannot be used with Direct3D.
-
Committed by jp9000
The graphics subsystem was not being freed here; for example, if a required effect failed to compile, initialization would still report success and leave the graphics subsystem active, sans the required effect. The graphics subsystem should be completely shut down if required libobs effects fail to compile.
-
Committed by jp9000
-
Committed by jp9000
-
- 13 October 2014, 1 commit
-
Committed by jp9000
This was sorely needed for debugging stream issues.
-
- 12 October 2014, 2 commits