- 31 January 2016, 14 commits
-
-
Committed by jp9000
-
Committed by jp9000
Sometimes the A and B sources of a transition would have a large difference in their timestamps, and the calculation of where to start the audio data for one of the sources could be above the tick size, which could cause a crash.
-
Committed by jp9000
If the circular audio buffer of the source has data remaining that's less than the audio frame tick count (1024 frames), it would just leave that audio data on the source without discarding it. However, this could cause audio buffering to increase unnecessarily under certain circumstances (when the next audio timestamp is within the timestamp jump window), because it would append data to that circular buffer despite the audio having stopped that long ago, forcing audio buffering to increase to compensate. Instead, just discard pending audio if it hasn't been written to; in other words, if the audio has stopped and there's insufficient audio left to continue processing, discard it.
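The discard decision described above can be sketched as a small predicate. This is a simplified, hypothetical model (the real libobs code operates on the source's circular audio buffer directly; the names here are illustrative, not the actual API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Frames processed per audio tick, as stated in the commit message. */
#define AUDIO_OUTPUT_FRAMES 1024

/* Returns true if leftover audio should be discarded: the buffer holds
 * fewer frames than one full tick and nothing new was written this tick. */
bool should_discard_pending_audio(size_t buffered_frames,
                                  bool written_this_tick)
{
    return !written_this_tick && buffered_frames < AUDIO_OUTPUT_FRAMES;
}
```

In this model, a source whose audio has stopped (nothing written this tick) with a sub-tick remainder gets its pending data dropped rather than extending the buffering window.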
-
Committed by jp9000
With the new audio subsystem, audio buffering is minimal at all times. However, when the audio buffering is too small or non-existent, it would cause the audio encoders to start with a timestamp that was actually higher than the first video frame timestamp. Video would have some inherent buffering/delay, but then audio could return and encode almost immediately. This created a possible window of empty time between the first encoded video packet and the first encoded audio packet, whereas audio buffering would cause the first audio packet's timestamp to always be way before the first video packet's timestamp. It would then incorrectly assume the two starting points were in sync. So instead of assuming the audio data is always first, this patch makes video wait until audio data comes in, and conversely buffers audio data until video comes in, and tries to find a starting point within that video data instead, ensuring a synced starting point whether audio buffering is active or not.
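The "find a starting point within the buffered video data" step can be illustrated with a small search function. This is a hypothetical sketch, not the actual libobs implementation: given the timestamps of buffered video frames and the first audio timestamp, pick the first video frame at or after the audio start.

```c
#include <stdint.h>
#include <stddef.h>

/* Returns the index of the first buffered video frame whose timestamp is
 * >= the first audio timestamp, or -1 if no such frame exists yet (keep
 * buffering in that case).  Illustrative names, not the real API. */
ptrdiff_t find_video_start_index(const uint64_t *video_ts, size_t count,
                                 uint64_t first_audio_ts)
{
    for (size_t i = 0; i < count; i++) {
        if (video_ts[i] >= first_audio_ts)
            return (ptrdiff_t)i;
    }
    return -1;
}
```

Frames before the chosen index would be pruned, giving a synced audio/video starting point regardless of how much audio buffering is active.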
-
Committed by jp9000
When starting a multi-track output, attempt to pair the video encoder with one of the audio encoders to ensure that the video and audio encoders start as close together in time as possible. This ensures the best possible audio/video syncing point when using multi-track audio output.
-
Committed by jp9000
When using multi-track audio, encoders cannot be paired like they can when only using a single audio track with video, so it has to choose the best point in the interleaved buffer as the "starting point", and if the encoders start up at different times, it has to prune that data and wait to start the output on the next video keyframe. When the audio encoders started up, there was the case where the encoders would take some time to load, and it would cause the pruning code to wait for the next keyframe to ensure startup syncing. Starting the audio encoders before starting the video encoder should reduce the possibility of that happening in a multi-track scenario.
-
Committed by jp9000
In a multi-track scenario it was not taking into consideration the possibility of secondary audio tracks, which could have caused desync on some of the audio tracks.
-
Committed by jp9000
Gets the index of the first interleaved packet of a given type and audio index
-
Committed by jp9000
-
Committed by jp9000
audio_input_buf should never be accessed outside of audio_buf_mutex.
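The rule above can be sketched as a locking discipline in C. This is a minimal, hypothetical stand-in (a plain array plays the role of libobs's circular buffer; the struct and function names are illustrative):

```c
#include <pthread.h>
#include <string.h>

/* Every read or write of audio_input_buf happens with audio_buf_mutex
 * held, per the commit message. */
typedef struct {
    pthread_mutex_t audio_buf_mutex;
    float audio_input_buf[1024];
    size_t frames;
} fake_source;

void source_push_audio(fake_source *s, const float *data, size_t count)
{
    pthread_mutex_lock(&s->audio_buf_mutex);
    memcpy(s->audio_input_buf + s->frames, data, count * sizeof(float));
    s->frames += count;
    pthread_mutex_unlock(&s->audio_buf_mutex);
}
```

Any code path that touches `audio_input_buf` without first taking `audio_buf_mutex` risks racing with the audio thread.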
-
Committed by jp9000
The seamless audio looping code would erroneously trigger for things that weren't loops, causing the audio data to continually push back and ignore timestamps, thus going out of sync. There does need to be loop handling code, but due to the fact that other things may need to trigger this code, it's best just to clear the audio data and start from a fresh sync point. Unfortunately for the case of loops, this means the window in which audio data and video frames loop needs to be muted.
-
Committed by jp9000
This is an additional method/helper that prevents composite sources from treating non-audio sources as audio sources.
-
Committed by jp9000
Fixes an issue where audio data would not be popped if the sources were not activated/presenting. This would cause the audio subsystem to needlessly buffer when they were reactivated again. Rendering all audio sources (excluding composite/filter sources) helps ensure that audio data is always popped and not left to pile up.
-
Committed by jp9000
When the new UI was put in, the action shortcuts were unintentionally removed from the list boxes.
-
- 30 January 2016, 2 commits
-
-
Committed by Jim
UI: Fix GCC 4.8 compilation issue
-
Committed by boombatower
Apparently using QPointer container for the QObject being given to the connect function makes it so it can't find the actual function to use. I'm guessing this is incomplete functionality or a bug that existed in GCC 4.8. Doesn't happen in 4.9+.
-
- 28 January 2016, 3 commits
- 27 January 2016, 21 commits
-
-
Committed by jp9000
Implements transitions, and introduces "Studio Mode", which allows live editing of the same or different scenes while preserving what's currently being displayed.

Studio Mode offers a number of new features:
- The ability to edit different scenes or the same scene without modifying what's currently being displayed (of course)
- The ability to set up "quick transitions" with a desired transition and duration that can be assigned hotkeys
- The option to create full copies of all sources in the program scene, allowing source properties of the same scene to be edited live without modifying the output, or (by default) to just use references. (Note, however, that certain sources cannot be duplicated, such as capture sources, media sources, and device sources)
- Swap Mode (enabled by default), which swaps the program scene with the preview scene when a transition completes

Currently, only non-configurable transitions (transitions without properties) are listed, and the only transitions available as of this writing are fade and cut. Future versions will add more transitions, such as swipe, stingers, and various others, and the UI will support adding/configuring/removing such configurable transitions.
-
Committed by jp9000
Makes signals use stack memory rather than allocate memory each time. Most likely a completely insignificant and pointless optimization.
-
Committed by jp9000
A comment that serves as a reminder to anyone who might need to edit the scene code: if the graphics mutex must be locked, it must be locked before entering the scene mutexes, or outside of the scene mutexes.
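The lock-ordering rule above can be shown as a short sketch. These globals and names are illustrative stand-ins, not the real libobs locks; the point is only the acquisition order:

```c
#include <pthread.h>

pthread_mutex_t graphics_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t scene_mutex    = PTHREAD_MUTEX_INITIALIZER;
int scene_edits = 0;

/* Correct ordering: graphics mutex first, then the scene mutex.
 * Taking them in the opposite order on another thread could deadlock. */
void edit_scene_with_graphics(void)
{
    pthread_mutex_lock(&graphics_mutex); /* outer lock first... */
    pthread_mutex_lock(&scene_mutex);    /* ...then the scene lock */
    scene_edits++;
    pthread_mutex_unlock(&scene_mutex);
    pthread_mutex_unlock(&graphics_mutex);
}
```

Enforcing a single global acquisition order is the standard way to avoid lock-order-inversion deadlocks when two subsystems share locks.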
-
Committed by jp9000
This fixes an age-old issue where audio samples could be lost or audio could temporarily go out of sync in the case of looping videos. When audio/video data is looping, there's a window between when the audio data resets its timestamp value and when the video data resets its timestamp value. This method simply pushes back the audio data while in that window and does not modify sync, and when it detects that it's out of the loop window it simply forces a resync of the audio data in the circular buffer. This ensures that minimal audio data is lost in the loop process, and minimizes the likelihood of any sort of sync issues associated with looping.
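The push-back behavior inside the loop window can be modeled roughly as follows. This is a simplified illustration (not the actual libobs implementation): a timestamp that jumps backwards signals the loop window, and audio is appended contiguously after the previous data instead of being resynced.

```c
#include <stdint.h>
#include <stdbool.h>

/* A backwards timestamp jump means the media looped. */
bool in_loop_window(uint64_t prev_end_ts, uint64_t new_ts)
{
    return new_ts < prev_end_ts;
}

/* While in the loop window, append contiguously after the previous
 * audio data (push back); once timestamps move forward again, the
 * incoming timestamp is used as-is and a resync is forced. */
uint64_t adjusted_audio_ts(uint64_t prev_end_ts, uint64_t new_ts)
{
    return in_loop_window(prev_end_ts, new_ts) ? prev_end_ts : new_ts;
}
```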
-
Committed by jp9000
Apply the user sync offset *after* timestamp smoothing, not before. Prevents small or gradual sync offsets from not being properly applied.
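The ordering matters because smoothing snaps small timestamp jitter to the expected value; if the user offset is added first, a small offset can be smoothed away. A simplified sketch (the threshold value and smoothing rule are illustrative stand-ins, not the exact libobs logic):

```c
#include <stdint.h>

#define TS_SMOOTHING_THRESHOLD 70000000ULL /* 70 ms in ns, illustrative */

/* Small jitter snaps to the expected timestamp; large jumps pass through. */
uint64_t smooth_ts(uint64_t expected_ts, uint64_t actual_ts)
{
    uint64_t diff = actual_ts > expected_ts ? actual_ts - expected_ts
                                            : expected_ts - actual_ts;
    return diff < TS_SMOOTHING_THRESHOLD ? expected_ts : actual_ts;
}

/* The user's sync offset is added only after smoothing, per the commit. */
int64_t apply_sync_offset(uint64_t smoothed_ts, int64_t sync_offset)
{
    return (int64_t)smoothed_ts + sync_offset;
}
```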
-
Committed by jp9000
Instead of applying the resampler offset right away (to each audio packet), apply the resampler offset when the timestamps are converted to system timestamps. This fixes an issue where, if audio timestamps reset to 0 (for whatever reason), the offset would cause the timestamp to go into the negative.
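The fix can be pictured as moving the subtraction to the conversion step. A hypothetical sketch (names and the anchoring scheme are illustrative, not the real libobs code): the signed offset is folded in on the large system-time side, so a media timestamp of 0 can no longer be pushed negative.

```c
#include <stdint.h>

/* base_sys_ts anchors media time 0 to the system clock; the signed
 * resampler offset is applied here rather than per-packet. */
uint64_t media_ts_to_system_ts(uint64_t media_ts, uint64_t base_sys_ts,
                               int64_t resampler_offset)
{
    return base_sys_ts + media_ts + (uint64_t)resampler_offset;
}
```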
-
Committed by jp9000
(Note: This commit also modifies the UI) Allows the ability to duplicate sources fully copied, and/or have the scene and its duplicates be private sources
-
Committed by jp9000
Creates a scene marked as a private source
-
Committed by jp9000
Allows full duplication of sources (with exception of sources that are marked with the OBS_SOURCE_DO_NOT_DUPLICATE output capability flag)
-
Committed by jp9000
Certain types of sources (display captures, game captures, audio device captures, video device captures) should not be duplicated. This capability flag hints that the source prefers references over full duplication.
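The flag check behaves roughly like the predicate below. This is an illustrative model: the bit value assigned to `OBS_SOURCE_DO_NOT_DUPLICATE` here is a stand-in, not the real libobs constant.

```c
#include <stdbool.h>
#include <stdint.h>

#define OBS_SOURCE_DO_NOT_DUPLICATE (1 << 7) /* illustrative bit value */

/* Sources carrying the flag always get a reference instead of a full
 * copy, even when the caller asked for full duplication. */
bool should_fully_duplicate(uint32_t output_flags, bool want_full_copy)
{
    if (output_flags & OBS_SOURCE_DO_NOT_DUPLICATE)
        return false; /* fall back to a reference */
    return want_full_copy;
}
```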
-
Committed by jp9000
Returns whether the source has any properties or not (whether it's configurable). Mostly only used for transitions, with the intention of automatically creating transitions that don't require configuration.
-
Committed by jp9000
Changes it to use obs_context_by_name
-
Committed by jp9000
Allows creation of private/unlisted sources/outputs/services/encoders
-
Committed by jp9000
The intention of this is to allow sources/outputs/etc to be created without being visible to the UI or save/load functions.
-
Committed by jp9000
(Note: This commit also modifies UI) Instead of using signals, use designated callback lists for audio capture and audio control helpers. Signals aren't suitable here due to the fact that signals aren't meant for things that happen every frame or things that happen every time audio/video is received. Also prevents audio from being allocated every time these functions are called due to the calldata structure.
-
Committed by jp9000
-
Committed by jp9000
A basic fade transition that fades to/from a source via a simple cross fade.
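At its core, a cross fade is a per-sample linear interpolation between the two sources. A minimal sketch of that math (not the actual obs-transitions shader/mixer code):

```c
/* Source A is scaled by (1 - t) and source B by t as t goes 0 -> 1,
 * for both audio samples and (per-pixel, in the shader) video. */
float cross_fade_sample(float a, float b, float t)
{
    return a * (1.0f - t) + b * t;
}
```

At t = 0 only A is heard/seen, at t = 1 only B, and in between the two are blended proportionally.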
-
Committed by jp9000
Transition sources are implemented by registering a source type as OBS_SOURCE_TYPE_TRANSITION. They're automatically marked as video composite sources, and the video_render/audio_render callbacks must be set when registering the source. The get_width and get_height callbacks are unused for these types of sources, as transitions automatically handle width/height behind the scenes via the transition settings.

In the video_render callback, the helper function obs_transition_video_render is used to assist in automatically processing and rendering the video. A render callback is passed to the function, which in turn passes to/from textures that are automatically rendered in the back-end.

Similarly, in the audio_render callback, the helper function obs_transition_audio_render is used to assist in automatically processing and rendering the audio. Two mix callbacks are used to handle how the source/destination sources are mixed together. To ensure the best possible quality, audio processing is per-sample.

Transitions can be set to automatically resize, or they can be set to have a fixed size. Sources within transitions can be made to scale to the transition size (with or without aspect ratio), or to not scale unless they're bigger than the transition. They can have a specific alignment within the transition, or they default to top-left. These features are implemented for the purpose of extending transitions to also act as "switch" sources later, where you can switch to/from two different sources using the transition animation.

Planned (but not yet implemented, lower-priority) features:
- "Switch" transitions, which allow switching back and forth between two sources with a transition animation without discarding the references
- Easing options to allow transitioning with a bezier or custom curve
- Manual transitioning to allow the front-end/user to manually control the transition offset
-
Committed by jp9000
This callback will no longer be used, instead transitions will modify the audio data directly.
-
Committed by jp9000
-
Committed by jp9000
These functions aren't used with transition sources, and will be ignored.
-