- 22 Jun 2015, 12 commits
-
-
Committed by jp9000
If the recording format is unsupported or doesn't support multiple tracks, warn the user to check their settings.
-
Committed by jp9000
-
Committed by jp9000
-
Committed by jp9000
Add a list box to the recording settings to allow changing the file format (mp4, mov, mkv, ts, flv).
-
Committed by jp9000
This allows saving to different container formats, such as mp4, mkv, and ts files.
-
Committed by jp9000
In addition to the flv file format, this allows saving to container formats such as mp4, ts, mkv, and any other container that supports the current codecs in use. It pipes the encoded data to the ffmpeg-mux process, which then safely muxes the file from the encoded data. If the main program terminates unexpectedly, the piped ffmpeg-mux process will safely close the file and write the trailer data, preventing file corruption.
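The crash safety comes from the muxer living in its own process. A minimal sketch of the write side, assuming a generic `FILE *` pipe; the real ffmpeg-mux invocation and packet protocol are not shown:

```c
#include <stdio.h>

/* Hypothetical sketch of the parent side of an encoder -> muxer pipe.
 * In the real design the FILE* would come from popen()/CreateProcess()
 * with ffmpeg-mux on the other end; writing through a pipe means the
 * child still owns the output file and can finalize the trailer even
 * if the parent dies. */
static size_t send_packet(FILE *mux_pipe, const unsigned char *data,
                          size_t size)
{
	size_t written = fwrite(data, 1, size, mux_pipe);
	fflush(mux_pipe); /* do not let small packets sit in stdio buffers */
	return written;
}
```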
-
Committed by jp9000
-
Committed by jp9000
This allows an output to specify that the format or track count is unsupported.
-
Committed by jp9000
Allows obs-encoder callbacks to remove themselves while in a callback.
-
Committed by jp9000
Allows audio-io callbacks to remove themselves while in a callback.
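Both this and the obs-encoder commit describe the same reentrancy pattern: a callback list that tolerates removal from inside a callback. A minimal sketch with illustrative names (not the libobs API), using deferred removal and a post-walk compaction:

```c
#include <stdbool.h>
#include <stddef.h>

/* While the list is being walked, removal only marks the slot; the
 * walk compacts the array afterwards, so a callback may safely remove
 * itself (or any other entry) mid-iteration. */
#define MAX_CALLBACKS 8

typedef void (*cb_fn)(void *param);

struct cb_list {
	cb_fn fns[MAX_CALLBACKS];
	void *params[MAX_CALLBACKS];
	bool removed[MAX_CALLBACKS];
	size_t num;
	bool in_callback;
};

static void cb_remove(struct cb_list *l, size_t idx)
{
	if (l->in_callback) {
		l->removed[idx] = true; /* defer: we are inside the walk */
	} else {
		l->fns[idx] = l->fns[l->num - 1];
		l->params[idx] = l->params[l->num - 1];
		l->num--;
	}
}

static void cb_fire(struct cb_list *l)
{
	l->in_callback = true;
	for (size_t i = 0; i < l->num; i++) {
		if (!l->removed[i])
			l->fns[i](l->params[i]);
	}
	l->in_callback = false;

	/* compact out the slots removed during the walk */
	size_t j = 0;
	for (size_t i = 0; i < l->num; i++) {
		if (!l->removed[i]) {
			l->fns[j] = l->fns[i];
			l->params[j] = l->params[i];
			j++;
		}
		l->removed[i] = false;
	}
	l->num = j;
}

/* demo: the first callback removes itself mid-walk */
static struct cb_list g_list;
static int g_hits;

static void self_removing(void *param)
{
	(void)param;
	g_hits++;
	cb_remove(&g_list, 0); /* safe even though cb_fire is running */
}

static void counting(void *param)
{
	(void)param;
	g_hits++;
}

static size_t run_demo(void)
{
	g_list.num = 0;
	g_hits = 0;
	g_list.fns[g_list.num] = self_removing;
	g_list.params[g_list.num++] = NULL;
	g_list.fns[g_list.num] = counting;
	g_list.params[g_list.num++] = NULL;
	cb_fire(&g_list); /* both fire; the first drops out afterwards */
	return g_list.num;
}
```

Without the deferral, swapping the last element into the removed slot while the loop is still running would skip or double-fire callbacks.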
-
Committed by jp9000
-
Committed by jp9000
The size parameter is the size of the elements, not the size of the data. The size parameter should be 1, and the element count should be the number of bytes. The reason for this change is that fread/fwrite would fail when the parameters were swapped.
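The difference shows up in the return value. A minimal sketch, with `write_bytes` as an illustrative wrapper:

```c
#include <stdio.h>

/* fwrite(ptr, size, nmemb, f) returns the number of *elements* written.
 * With size = nbytes and nmemb = 1, a short write reports 0 or 1 and
 * the byte count is lost; with size = 1, the return value is exactly
 * the number of bytes written. */
static size_t write_bytes(FILE *f, const void *data, size_t nbytes)
{
	return fwrite(data, 1, nbytes, f);
}
```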
-
- 21 Jun 2015, 3 commits
-
-
Committed by jp9000
When the editable list property was added, I added a case to a switch statement without putting a break on the case before it.
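The bug class in miniature, with hypothetical cases rather than the actual property code:

```c
/* Hypothetical switch illustrating the fall-through: deleting the
 * marked break makes case 1 silently run case 2's code as well, which
 * is exactly what happens when a new case is added after a case that
 * never had its break. */
static int handler_for(int property_type)
{
	int handler = 0;
	switch (property_type) {
	case 1:
		handler = 10;
		break; /* forgetting this break turns case 1 into case 2 */
	case 2:
		handler = 20;
		break;
	default:
		handler = -1;
		break;
	}
	return handler;
}
```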
-
Committed by jp9000
-
Committed by jp9000
This is mostly so that it can be used with the scroll filter, which uses identical shaders, but needs to wrap the texture instead of clamp.
-
- 20 Jun 2015, 1 commit
-
-
Committed by Night
You should really auto-update these lists from the Twitch API. Closes jp9000/obs-studio#440
-
- 15 Jun 2015, 1 commit
-
-
Committed by Jim
UI: Fix settings reload selecting wrong fmt/codec
-
- 12 Jun 2015, 2 commits
- 11 Jun 2015, 2 commits
-
-
Committed by John Bradley
-
Committed by jp9000
Instead of using system timestamps for playback, use the timestamps directly from the video/audio data to ensure all the data is synced up using the obs_source back-end. I think the original misconception when this was written was that OBS would not handle timestamp resets/loops, which isn't the case; it will actually handle all timestamp resets and abnormalities. It's always best to use the obs_source back-end for all playback and syncing.
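A sketch of timestamp-driven playback as described above: each frame is scheduled by its own timestamp relative to the first one, and a backwards jump is treated as a reset rather than an error. The names are illustrative, not the obs_source API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Track playback position from the media's own timestamps.  A
 * timestamp smaller than the current base means the source reset or
 * looped, so we simply re-anchor instead of failing. */
struct ts_track {
	bool started;
	uint64_t base_ts; /* timestamp of the first (or post-reset) frame */
};

static uint64_t playback_offset(struct ts_track *t, uint64_t frame_ts)
{
	if (!t->started || frame_ts < t->base_ts) { /* start, reset or loop */
		t->base_ts = frame_ts;
		t->started = true;
	}
	return frame_ts - t->base_ts;
}
```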
-
- 10 Jun 2015, 1 commit
-
-
Committed by Jim
Use correct output folder for Show/Remux
-
- 07 Jun 2015, 7 commits
-
-
Committed by Palana
When the bitrate was set to 64, CoreAudio would call complex_input_data_proc more than once, which in turn would cause consumed bytes in the input buffer to be "freed" more than once (once for every additional call of complex_input_data_proc and once in aac_encode).
-
Committed by Palana
-
Committed by Palana
-
Committed by Palana
-
Committed by Palana
-
Committed by Palana
These notifications can happen with e.g. the Japanese Kotoeri layout when switching between active windows.
-
Committed by Palana
In the (unlikely) event of multiple concurrent calls to input_method_changed, it was possible for the log messages to appear out of order with respect to which layout would actually be active after the last log message.
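The usual fix for this class of race is to take one lock around both the state update and the log call, so the last log line always names the layout that is actually active. A minimal sketch with illustrative names, not the actual code:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Holding the mutex across both the state change and the log call
 * serializes the pairs, so log order matches the final state. */
static pthread_mutex_t layout_mutex = PTHREAD_MUTEX_INITIALIZER;
static char active_layout[64];

static void input_method_changed(const char *new_layout)
{
	pthread_mutex_lock(&layout_mutex);
	snprintf(active_layout, sizeof(active_layout), "%s", new_layout);
	printf("Input method changed to: %s\n", active_layout);
	pthread_mutex_unlock(&layout_mutex);
}
```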
-
- 06 Jun 2015, 2 commits
-
-
Committed by jp9000
This allows outputting the audio of the device as desktop audio (via the WaveOut or DirectSound audio renderers) instead of only capturing it. In the future we'll implement audio monitoring, which will make this feature obsolete, but for the time being I decided to add this option as a temporary measure to allow users to play the audio from their devices via the DirectShow output.
-
Committed by jp9000
-
- 05 Jun 2015, 8 commits
-
-
Committed by jp9000
The audio bitrate required is insignificant relative to the video bitrate, and because a lower-quality encoder may be in use (such as FFmpeg's AAC encoder), setting the default to 160 helps reduce any potential quality loss.
-
Committed by jp9000
Because async timestamps themselves can be susceptible to minor jitter from certain types of inputs, increase the allowable jitter compensation value to ensure that the rendered frame timing from async video sources is always as close as possible to the compositor. When the framerate of the source is the same as the framerate of the compositor, this (combined with the fact that clamped video timing is now used with async video frames) helps ensure that buffered async video sources sync their rendering to the compositor as accurately as possible despite jitter in the source's timestamps. If there is no jitter in the source's timestamps, it will always sync up perfectly with the compositor, thanks to clamped video timing.
-
Committed by jp9000
When playing back buffered async frames, this reduces the probability that new frames will be missed/skipped due to jitter in the system timestamps. If a buffered async source is playing at the same framerate as the compositor and there is no jitter in the async source's timestamps, then the async source will play back perfectly in sync with the compositor thanks to this change, ensuring that there's no skipped or missed frames in video playback.
-
Committed by jp9000
The "clamped" video time is the system time per video frame that is closest to the current system time, but always divisible by the frame interval. For example, if the last frame system timestamp was 1600 and the new frame is 2500, but the frame interval is 800, then the "clamped" video time is 2400. This clamped value is useful to get the relative system time without any jitter.
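The clamping reduces to rounding the system time to the nearest multiple of the frame interval. A sketch (the actual implementation may compute this relative to the last frame's system time rather than from zero):

```c
#include <stdint.h>

/* Round `now` to the nearest multiple of the frame interval.  With the
 * values from the example above (now = 2500, interval = 800), the
 * nearest multiple is 2400. */
static uint64_t clamped_video_time(uint64_t now, uint64_t interval)
{
	return ((now + interval / 2) / interval) * interval;
}
```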
-
Committed by jp9000
When buffering is enabled for an async video source, minor drift in timestamps or unexpected delays to frames can sometimes cause frames to buffer more and more in memory, in some cases eventually causing the system to run out of memory.

The circumstances in which this can happen seem to depend on both the computer and the devices in use. So far, the only known circumstances are with heavily buffered devices, such as Hauppauge cards, where decoding can sometimes take too long and cause continual frame playback delay, and thus continual buffering until memory runs out. I've never been able to replicate it on any of my machines, even after hours of testing.

This patch is a precautionary measure that puts a hard limit on the number of async frames that can be queued, to prevent any case where memory might continually build for whatever reason. If the limit is exceeded, the cache is cleared to reset the buffering. A user with this problem tested the patch with success and positive feedback, and the intervals between buffering resets were long enough not to be noticeable while streaming/recording.

Ideally, when decoding frames from such devices, frame dropping should be used to ensure playback doesn't incur extra delay, although this sort of hard limit on the frame cache should be implemented regardless, as a safety precaution. For DirectShow encoded devices I should just switch to faruton's libff for decoding and enable the frame dropping options. That would probably explain why no one has ever reported it for the media source, and pretty much only for DirectShow device usage.
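The precaution described above can be sketched as follows; the limit value and the struct are illustrative, not the libobs code:

```c
#include <stddef.h>

/* Hard cap on the async frame queue: if a push would exceed the limit,
 * flush everything and start the buffering over rather than letting
 * memory grow without bound. */
#define MAX_ASYNC_FRAMES 30

struct frame_cache {
	size_t num_frames;
	size_t resets; /* how many times we had to flush */
};

static void cache_push_frame(struct frame_cache *c)
{
	if (c->num_frames >= MAX_ASYNC_FRAMES) {
		/* in the real code: free every queued frame here */
		c->num_frames = 0;
		c->resets++;
	}
	c->num_frames++;
}
```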
-
Committed by Jim
Modified 16-bit to 32-bit color conversion to use SSE.
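For reference, a scalar version of the kind of conversion the commit vectorizes; the RGB565 bit layout and XRGB output order are assumptions, not taken from the commit:

```c
#include <stdint.h>

/* Widen each 5/6-bit RGB565 channel to 8 bits by replicating its top
 * bits, then pack into 0xAARRGGBB with full alpha.  An SSE version
 * would do the same shifts and masks on eight pixels at a time. */
static uint32_t rgb565_to_xrgb8888(uint16_t px)
{
	uint32_t r = (px >> 11) & 0x1F;
	uint32_t g = (px >> 5) & 0x3F;
	uint32_t b = px & 0x1F;

	r = (r << 3) | (r >> 2); /* 5 -> 8 bits */
	g = (g << 2) | (g >> 4); /* 6 -> 8 bits */
	b = (b << 3) | (b >> 2);

	return 0xFF000000u | (r << 16) | (g << 8) | b;
}
```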
-
Committed by Bl00drav3n
-
Committed by Eric Bataille
Ensures that the "Show Recordings" and "Remux Recordings" file menu items will open the recordings folder of the currently active output mode rather than always that of the simple output mode.
-
- 03 Jun 2015, 1 commit
-
-
Committed by jp9000
On Windows Vista/7 you cannot really use display capture efficiently without disabling Aero, so this adds a settings option that allows Aero to be disabled, and causes it to be disabled on startup.
-