- 22 Feb, 2014 (2 commits)
-
-
Committed by jp9000
- Remove obs_source::type because it became redundant now that the type is always stored in the obs_source::info variable.
- Apply presentation volumes of 1.0 and 0.0 to sources when they activate/deactivate, respectively. This also applies that presentation volume to all sub-sources, with the exception of transition sources. Transition sources must apply presentation volume manually to their sub-sources with the new transition functions below.
- Add a "transition_volume" variable to the obs_source structure, and add three functions for handling volume for transitions:
  * obs_transition_begin_frame
  * obs_source_set_transition_vol
  * obs_transition_end_frame

Because the to/from targets of a transition source might both contain some of the same sources, handling the transitioning of volumes for that specific situation becomes an issue. So for transitions, instead of modifying the presentation volumes directly for both sets of sources, we do this:

- First, call obs_transition_begin_frame at the beginning of each transition frame, which resets the transition volumes of all sub-sources to 0. Presentation volumes remain unchanged.
- Call obs_source_set_transition_vol on each sub-source, which adds the volume to the transition volume for each source in that source's tree. Presentation volumes still remain unchanged.
- Finally, call obs_transition_end_frame when complete, which sets the presentation volumes to the transition volumes.

For example, say one source appears in both the "transitioning from" sources and the "transitioning to" sources. This scheme adds both the fade-in and fade-out volumes to that source, and when the frame is complete it sets the presentation volume to the sum of those two values, rather than setting the presentation volume for that same source twice, which would cause strange volume jittering and set the wrong values.
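The three-phase accumulation above can be sketched in plain C. This is a minimal illustrative model, not the actual libobs code; the struct layout and function names are assumptions.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical minimal model of the three-phase transition volume
 * scheme; field and function names are assumptions, not the actual
 * libobs definitions. */
struct source {
	float presentation_volume;
	float transition_volume;
};

/* Phase 1: reset transition volumes; presentation volumes untouched. */
static void transition_begin_frame(struct source *sources, size_t n)
{
	for (size_t i = 0; i < n; i++)
		sources[i].transition_volume = 0.0f;
}

/* Phase 2: accumulate. A source present in both the "from" and "to"
 * trees receives the sum of both volumes instead of being set twice. */
static void set_transition_vol(struct source *s, float vol)
{
	s->transition_volume += vol;
}

/* Phase 3: apply the accumulated volume exactly once per source. */
static void transition_end_frame(struct source *sources, size_t n)
{
	for (size_t i = 0; i < n; i++)
		sources[i].presentation_volume = sources[i].transition_volume;
}
```

A source shared by both sides of a cross-fade thus ends the frame with the sum of the fade-in and fade-out weights, avoiding the jitter described above.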
-
Committed by jp9000
-
- 21 Feb, 2014 (4 commits)
-
-
Committed by jp9000
Sources will now be properly activated and deactivated when they are in use or not in use. Had to figure out a way to handle child sources, and children of children; ended up implementing simple functions that parents use to signal adding/removal, to help with hierarchical activation and deactivation of child sources.

To prevent the source activate/deactivate callbacks from being called more than once, added an activation reference counter. The first increment calls the activate callback, and the last decrement calls the deactivate callback.

Added "source-activate" and "source-deactivate" signals to the main obs signal handler, and "activate" and "deactivate" signals to individual source signal handlers.

Also fixed the main window so it properly selects a source when the current active scene has been changed.
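The activation reference counter can be sketched as follows; this is an illustrative simplification under assumed names, not the actual libobs internals.

```c
#include <assert.h>

/* Counters standing in for the activate/deactivate callbacks. */
static int activate_calls;
static int deactivate_calls;

struct source {
	int activate_refs;
};

static void source_activate(struct source *s)
{
	/* Only the first increment fires the activate callback, so a
	 * source referenced by several parents activates once. */
	if (s->activate_refs++ == 0)
		activate_calls++;
}

static void source_deactivate(struct source *s)
{
	/* Only the last decrement fires the deactivate callback. */
	if (--s->activate_refs == 0)
		deactivate_calls++;
}
```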
-
Committed by jp9000
-
Committed by jp9000
-
Committed by jp9000
Added a "master" volume for the entire audio subsystem. Also added a "presentation" volume for both the master volume and for each individual source. The presentation volume is used to control things like transitioning volumes and preventing sources from outputting any audio when they're inactive, as well as for other uses in the future.
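One plausible way these layers combine (an assumption for illustration, not the confirmed libobs formula) is a straight multiplication, which is what lets a presentation volume of 0.0 silence an inactive source regardless of its user-set volume:

```c
#include <assert.h>

/* Hypothetical combination of the volume layers described above; the
 * multiplication scheme is an assumption, not the actual libobs code. */
static float effective_volume(float source_vol, float source_presentation,
			      float master_vol, float master_presentation)
{
	return source_vol * source_presentation *
	       master_vol * master_presentation;
}
```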
-
- 19 Feb, 2014 (4 commits)
-
-
Committed by jp9000
If audio underflowed, it originally did a full reset of the audio timing. However, resetting the audio timing when this happens is a bad idea; it's better to simply clamp the value to the expected timestamp to ensure seamless audio output. Also, implemented audio timestamp smoothing to keep the audio as seamless as possible.
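The clamping idea can be sketched like this; the function name and the drift threshold are assumptions for illustration, not the values used by libobs.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical drift threshold in nanoseconds. */
#define SMOOTHING_THRESHOLD_NS 70000000ULL

/* If an incoming timestamp lands close to where the previous audio
 * data left off, snap it to the expected value instead of resetting
 * all audio timing; only a large jump is taken at face value. */
static uint64_t smooth_ts(uint64_t expected_ts, uint64_t actual_ts)
{
	uint64_t diff = actual_ts > expected_ts ? actual_ts - expected_ts
						: expected_ts - actual_ts;
	return (diff < SMOOTHING_THRESHOLD_NS) ? expected_ts : actual_ts;
}
```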
-
Committed by jp9000
I actually did compile that last commit and misread the failed projects as 0. I'm just going to put the conversion functions in video-io.h because it requires them anyway, and video-scaler.h already depends on video-io.h for the video_format enum.
-
Committed by jp9000
Had a bit of an include loop going on there which caused this issue to occur.
-
Committed by jp9000
Add a scaler interface (defaulting to swscale). If a separate output wants to use a different scale or format than the default output format, a scaler instance can be created automatically for that output, which will then receive the newly scaled output.
-
- 18 Feb, 2014 (4 commits)
-
-
Committed by Palana
av_frame_alloc, av_frame_free, among others, live in libavutil
-
Committed by jp9000
If there is, for example, more than one audio output and they have different sample rates or channel counts, this allows automatic conversion of that audio to the requested formats/channels/rates (but only if requested).
-
Committed by jp9000
-
Committed by jp9000
Turns out that on some adapters, due to some sort of internal GPU precision error, fmod(x, y) can return x when x == y, which is incorrect (and no, they were actually equal, not off due to precision errors). This would cause the shader to sometimes sample the wrong coordinates on the edges. Adding 0.1 to the x value before passing it to fmod, and then flooring the result, fixes the issue.
-
- 17 Feb, 2014 (3 commits)
-
-
Committed by jp9000
Also fixed an issue with the new conversion shader not compiling properly on some video devices.
-
Committed by jp9000
-
Committed by jp9000
- Changed glMapBuffer to glMapBufferRange to allow invalidation. Using glMapBuffer alone was causing some unacceptable stalls.
- Changed dynamic buffers from GL_DYNAMIC_WRITE to GL_STREAM_WRITE because I had misunderstood the OpenGL specification.
- Added _OPENGL and _D3D11 built-in preprocessor macros to effects to allow special processing if needed.
- Added fmod support to shaders (NOTE: D3D and GL do not function identically with negative numbers when using this; positive numbers, however, function identically).
- Created a planar conversion shader that converts from packed YUV to planar 420 right on the GPU without any CPU processing. This reduces the required GPU download size to approximately 37.5% of its normal rate, and GPU usage is down by 10 entire percentage points despite the extra required pass.
-
- 16 Feb, 2014 (2 commits)
- 15 Feb, 2014 (6 commits)
-
-
Committed by jp9000
-
Committed by jp9000
Staging surfaces with GL originally copied to a texture and then downloaded that copied texture, but I realized there was really no need to do that. Now they copy directly from the texture they're given rather than copying to a buffer first. Secondly, this hopefully fixes the Mac issue where the only way to perform an asynchronous texture download is via FBOs and glReadPixels. It's a really dumb issue with Macs, and the number of "gotchas" and non-standard internal GL functionality on Mac is really annoying.
-
Committed by jp9000
...Screw it. %llu it is.
-
Committed by jp9000
...I neglected to put a '%' character before using the PRI* macros.
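For context, the PRI* macros from `<inttypes.h>` expand only to the length and conversion characters (e.g. `"llu"` on some platforms), so the `%` has to be supplied separately via string literal concatenation. A small sketch (the helper name is ours, not from the commit):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Correct usage: the '%' is written as its own literal, then the
 * preprocessor concatenates PRIu64 onto it at compile time. */
static void format_u64(char *buf, size_t size, uint64_t value)
{
	snprintf(buf, size, "%" PRIu64, value);
}
```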
-
Committed by jp9000
-
Committed by jp9000
There were a *lot* of warnings; managed to remove most of them. Also, put the warning flags before C_FLAGS and CXX_FLAGS rather than after, as -Wall -Wextra was overriding flags that came before it.
-
- 14 Feb, 2014 (2 commits)
- 13 Feb, 2014 (5 commits)
-
-
Committed by jp9000
Originally, the rendering system was designed only to display sources and such, but I realized there was a flaw: if you wanted to render the main viewport in a custom way, or even build the entire application as a graphics-based front end, you wouldn't have been able to do that.

Displays have now been separated into viewports and displays. A viewport is used to store and draw sources; a display is used to handle draw callbacks. You can even use displays without viewports to draw custom render displays containing graphics calls, though usually they would be used in combination with source viewports at least. This requires a tiny bit more work to create simple source displays, but in the end it's worth it for the added flexibility and options it brings.
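The viewport/display split can be sketched as a display that merely dispatches registered draw callbacks; drawing a viewport's sources is then just one possible callback among others. All names below are hypothetical, not the actual libobs API.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CALLBACKS 8

typedef void (*draw_cb)(void *param);

/* A display knows nothing about sources; it only runs callbacks. */
struct display {
	draw_cb callbacks[MAX_CALLBACKS];
	void   *params[MAX_CALLBACKS];
	size_t  num;
};

static void display_add_callback(struct display *d, draw_cb cb, void *param)
{
	if (d->num < MAX_CALLBACKS) {
		d->callbacks[d->num] = cb;
		d->params[d->num] = param;
		d->num++;
	}
}

/* Called once per frame: every registered callback gets to draw. */
static void display_render(struct display *d)
{
	for (size_t i = 0; i < d->num; i++)
		d->callbacks[i](d->params[i]);
}

/* Example callback: a custom render step that just counts invocations. */
static void count_draw(void *param)
{
	(*(int *)param)++;
}
```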
-
Committed by jp9000
- Add a bit more documentation to some of the main headers.
- Fix alignment for the size of the name string of an obs_data_item.
-
Committed by jp9000
-
Committed by jp9000
Used the wrong function name with the video_render value. This is what I get for not compiling it on a Mac first.
-
Committed by jp9000
-
- 12 Feb, 2014 (2 commits)
-
-
Committed by jp9000
-
Committed by jp9000
The API used to be designed in such a way that it expected exports for each individual source/output/encoder/etc. You would export functions for each, and it would automatically load those functions based on a specific naming scheme from the module.

The idea behind this was that I wanted to limit the usage of structures in the API so only functions could be used. It was an interesting idea in theory, but it turned out to be flawed in a number of ways:

1.) Requiring exports to create sources/outputs/encoders/etc. meant that you could not create them by any other means, which meant that things like faruton's .NET plugin would become difficult.

2.) Export function declarations could not be checked, so if you created a function with the wrong parameters and parameter types, the compiler had no way to catch it.

3.) It required overly complex load functions in libobs just to handle it. It makes much more sense to just have a load function that you call manually. Complexity is the bane of all good programs.

4.) It required that you have functions with specific names, which looked and felt somewhat unsightly.

So, to fix these issues, I replaced it with a more commonly used API scheme, seen commonly in places like kernels and typical C libraries with abstraction. You simply create a structure that contains the callback definitions, and you pass it to a function that registers the definition (such as obs_register_source), which you call in the obs_module_load of the module. It also automatically checks the structure size, ensuring that only the required values are loaded if the structure happens to gain new values in an API change.

The "main" source file for each module must include obs-module.h, and must use OBS_DECLARE_MODULE() within that source file.

Also, started writing some doxygen documentation in the main library headers. Will add more detailed documentation as I go.
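The struct-registration pattern with a struct-size check can be sketched as below. The field names mirror the libobs style, but the bodies are hypothetical simplifications, not the actual implementation.

```c
#include <assert.h>
#include <string.h>

/* A definition struct the module fills out and registers. The caller
 * sets struct_size to sizeof() of the struct it was compiled against,
 * so an older module registering against a newer library only has its
 * known fields copied; newer fields stay zeroed. */
struct source_info {
	size_t      struct_size;
	const char *id;
	void      (*render)(void *data);
};

#define MAX_SOURCE_TYPES 16
static struct source_info registered[MAX_SOURCE_TYPES];
static size_t num_registered;

static int register_source(const struct source_info *info)
{
	if (num_registered >= MAX_SOURCE_TYPES)
		return 0;

	struct source_info *dst = &registered[num_registered];
	memset(dst, 0, sizeof(*dst));

	/* Copy only as many bytes as the caller's struct contains. */
	size_t size = info->struct_size < sizeof(*dst) ? info->struct_size
						       : sizeof(*dst);
	memcpy(dst, info, size);
	num_registered++;
	return 1;
}
```

A module's load function would then call `register_source(&my_info)` once per source type it provides, with no naming-scheme exports required.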
-
- 11 Feb, 2014 (6 commits)
-
-
Committed by jp9000
It wasn't being set properly for the graphics textures either, so it would end up causing SSE to crash due to misalignment.
-
Committed by jp9000
Note to self - compile before committing.
-
Committed by jp9000
The 444->420 conversion function will crash if you don't align the output width to a 16-byte boundary.
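Rounding a width up to the 16-byte boundary is the usual requirement for SSE loads/stores; a small helper (illustrative, not taken from the libobs source):

```c
#include <assert.h>
#include <stdint.h>

/* Round width up to the next multiple of 16 by adding 15 and then
 * clearing the low four bits. */
static uint32_t align16(uint32_t width)
{
	return (width + 15) & ~(uint32_t)15;
}
```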
-
Committed by jp9000
-
Committed by jp9000
The codecs used for AVI seem to have less of an impact than the MP4 codecs at the moment.
-
Committed by jp9000
It will now output to a filename that you specify when you click the record button. This is just for testing.
-