We now keep a per-track list of caption data. When caption insertion
is triggered, the caption is added to the list of each actively
encoding video track. When doing an interleaved send, we pull the data
from the caption list of the corresponding video track. This ensures
that captions get copied to all video tracks.
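A minimal sketch of the idea (not the actual libobs code; MAX_TRACKS,
`struct caption_track`, and the helpers below are hypothetical):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define MAX_TRACKS 6

struct caption_track {
	char **items; /* queued caption strings for this track */
	size_t count;
	bool active; /* track has an actively encoding video encoder */
};

static struct caption_track tracks[MAX_TRACKS];

/* Caption insertion: copy the caption to every active track's list
 * (error handling omitted for brevity). */
static void captions_insert(const char *text)
{
	for (size_t i = 0; i < MAX_TRACKS; i++) {
		struct caption_track *t = &tracks[i];
		if (!t->active)
			continue;
		t->items = realloc(t->items,
				   (t->count + 1) * sizeof(char *));
		t->items[t->count++] = strdup(text);
	}
}

/* Interleaved send: pull the next caption queued for the packet's
 * video track; the caller frees the returned string. */
static char *captions_pop(size_t track)
{
	struct caption_track *t = &tracks[track];
	if (!t->count)
		return NULL;
	char *text = t->items[0];
	memmove(t->items, t->items + 1, --t->count * sizeof(char *));
	return text;
}
```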
Enable all of the previously Windows-only paths for OpenGL backends
that support encode_texture2
Co-authored-by: Torge Matthies <openglfreak@googlemail.com>
To keep sources from having to put too much extra consideration into
threading, defer the media control functions that directly affect
functionality to the video thread. Getters will still have to take
threading into account, but this should make things much more trivial
for media controls thread-wise.
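A rough sketch of the deferral pattern (the wrapper and struct below
are hypothetical; the only real API assumed is libobs's existing
`obs_queue_task()`, with OBS_TASK_GRAPHICS dispatching to the
video/graphics thread):

```c
#include <obs.h>

struct media_action {
	obs_source_t *source;
	bool pause;
};

static void do_play_pause(void *param)
{
	struct media_action *action = param;
	/* Runs on the video/graphics thread, serialized with source
	 * ticks, so the source's media callback needs no extra mutex. */
	obs_source_media_play_pause(action->source, action->pause);
	obs_source_release(action->source);
	bfree(action);
}

void defer_play_pause(obs_source_t *source, bool pause)
{
	struct media_action *action = bzalloc(sizeof(*action));
	action->source = obs_source_get_ref(source);
	action->pause = pause;
	/* Queue the state change; don't block the calling thread. */
	obs_queue_task(OBS_TASK_GRAPHICS, do_play_pause, action, false);
}
```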
(Lain note: Context: I noticed that things such as the slide show
require mutexes due to their media controls, and I felt that it was
largely unnecessary and that libobs should mitigate the threading issue
itself and keep it all in the video thread like it should be. Again,
the getters are still going to require *some* consideration, but for
the type of stuff we're doing, querying state is far more trivial to
handle thread-wise.)
(Lain note: This is mostly so more (yes more) can be added to this
god-forsaken structure without it getting too messy. In terms of actual
code, I need to be better about writing actual code comments. ...Meaning
that I actually need to start writing code comments.)
To avoid passing the `struct darray *` type, which cannot hold type
information, this commit defines typed array types and uses those
types in function parameters.
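A small sketch of the pattern using libobs's existing `DARRAY()`
helper (the element type, typedef name, and function below are
hypothetical):

```c
#include <util/darray.h>

/* Hypothetical element type for the example */
struct item {
	int id;
};

/* The named type keeps the element-type information that a bare
 * `struct darray *` parameter cannot carry. */
typedef DARRAY(struct item) item_array_t;

static void process(item_array_t *items)
{
	for (size_t i = 0; i < items->num; i++) {
		struct item *it = &items->array[i];
		it->id++; /* typed access, no void* casts needed */
	}
}
```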
Allows rescaling resolution for GPU encoders, and allows moving
rescaling for CPU encoders from the CPU to the GPU. Rescaling is
implemented via core video mixes; when GPU scaling is enabled and no
matching core video mix exists, encoders create their own core mix
with matching width/height/format/colorspace/range.
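Roughly, the lookup works like this (a simplified, self-contained
sketch; `struct core_mix` and the helper names are stand-ins for the
real internals):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

struct mix_params {
	uint32_t width, height;
	int format, colorspace, range;
};

struct core_mix {
	struct mix_params params;
	struct core_mix *next;
};

static struct core_mix *mixes; /* list of active core video mixes */

static bool params_match(const struct mix_params *a,
			 const struct mix_params *b)
{
	return a->width == b->width && a->height == b->height &&
	       a->format == b->format && a->colorspace == b->colorspace &&
	       a->range == b->range;
}

/* When GPU scaling is enabled: reuse a matching core mix if one
 * exists, otherwise create one for the encoder so the rescale runs on
 * the GPU as part of normal mix rendering. */
static struct core_mix *get_mix_for_encoder(const struct mix_params *p)
{
	for (struct core_mix *mix = mixes; mix; mix = mix->next)
		if (params_match(&mix->params, p))
			return mix;

	struct core_mix *mix = calloc(1, sizeof(*mix));
	mix->params = *p;
	mix->next = mixes;
	mixes = mix;
	return mix;
}
```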
Switching to a static library that contains version information as
const char strings has multiple benefits:
* The version information provided externally via compiler definitions
will fail compilation early if malformed
* An updated version string (which will happen with every commit) will
not invalidate existing compilation units, because only the static
library is affected by the change
* An update of the version just requires a recompilation of the
static library and a linker update
* An update of the version will _not_ infect the rest of the codebase
(as it does currently, because everything includes obsconfig.h one
way or another)
* Other modules which used the macro definition directly have been
updated as much as possible to use the proper getter function from
`libobs` instead (some Windows-specific modules use preprocessor
string composition; in those cases the value has been added directly
as a compiler definition)
* Because the impact of a version change due to a commit hash change
is limited to the static library, ccache hit rates should be
improved considerably
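A sketch of the pattern with hypothetical file names (only
`obs_get_version_string()` is the real libobs getter):

```c
/* obsversion.c — the only translation unit that sees the compiler
 * definition, so a changed commit hash rebuilds just this one file
 * plus a re-link. A malformed (non-string) OBS_VERSION fails to
 * compile right here instead of somewhere deep in the codebase. */
const char *obs_get_version_string(void)
{
	return OBS_VERSION; /* passed via -DOBS_VERSION="\"...\"" */
}
```

```c
/* obsversion.h — what everything else includes; no version macro
 * leaks out, so other compilation units never go stale. */
extern const char *obs_get_version_string(void);
```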
There was quite a bit of conflated usage of mixes (which refer to raw
audio) and encoder counts. This fully separates the two and makes the
distinction explicit when iterating over mixes vs. encoders.
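Sketch of the distinction (MAX_AUDIO_MIXES is the real libobs
constant; the struct and loops are illustrative only):

```c
#include <stddef.h>

#define MAX_AUDIO_MIXES 6 /* raw audio mixes, fixed in libobs */

struct output {
	size_t num_audio_encoders; /* not the same thing as mix count */
};

static void iterate(struct output *out)
{
	/* Raw audio work is bounded by the mix count... */
	for (size_t i = 0; i < MAX_AUDIO_MIXES; i++) {
		/* per-mix work */
	}

	/* ...encoded audio work is bounded by attached encoders. */
	for (size_t i = 0; i < out->num_audio_encoders; i++) {
		/* per-encoder work */
	}
}
```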
Because Intel has wonderful code which forces OBS to run on the iGPU
if there are both an Intel iGPU and an Intel dGPU on the same system,
the adapter index OBS is set to internally will no longer be valid.
Thus, if anyone calls `obs_get_video_info()` to try to find out what
adapter index OBS is running on, it will be invalid on those
computers. Wonderful.
So, basically, this code here just fixes it so if you want to call
`obs_get_video_info()`, it'll actually have a valid adapter index now.
That way we can reference the adapter index when determining what GPU
we're actually running on, without having to do anything super
complicated and silly like comparing adapter GUIDs just to figure out
what adapter OBS is actually running on. I don't want the code to be a
mess anymore.
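Minimal usage sketch (`obs_get_video_info()` and the `adapter` field
are the real, existing API; after this change the index should reflect
the adapter actually in use):

```c
#include <obs.h>

static uint32_t current_adapter_index(void)
{
	struct obs_video_info ovi;
	if (obs_get_video_info(&ovi))
		return ovi.adapter; /* now matches the adapter in use */
	return 0; /* video not active */
}
```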
(I like how in any other situation on the face of the planet, there's no
need to force OBS to run on an integrated adapter. *Normally* OBS
*should* run on the dedicated adapter, that way it can actually capture
games properly. You can probably guess as to why they're forcing it to
run on the integrated adapter rather than the dedicated adapter. But you
know what? Whatever. I don't really care anymore I guess. Just...
whatever. Here we are I guess. Also I was in a bad mood while writing
this just as a disclaimer.)
(I hate that this commit exists. I hate that the commit c83eaaa51c
exists even more.)
Seems to solve lag encountered with the new AMF encoder. The
SubmitInput call in the AMF encoder can occasionally stall for quite a
long time, but most calls take only microseconds, so we can compensate
by simply increasing the buffering (from 3 to 10 textures).
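Illustrative sketch of the buffering change (not the encoder's actual
code): with a ring of 3 staging textures, one long SubmitInput stall
quickly backs the pipeline up; a depth of 10 lets the usual
microsecond-fast calls keep draining while the occasional slow call is
absorbed.

```c
#include <stddef.h>

#define NUM_BUFFERED_TEXTURES 10 /* previously 3 */

struct texture_ring {
	void *textures[NUM_BUFFERED_TEXTURES]; /* GPU surfaces */
	size_t write_idx;
};

static void *next_texture(struct texture_ring *ring)
{
	void *tex = ring->textures[ring->write_idx];
	ring->write_idx = (ring->write_idx + 1) % NUM_BUFFERED_TEXTURES;
	return tex;
}
```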
This reverts commit 90a409fe58.
Reverts #7077 for now. This really shouldn't be done so close to
release. The crash it fixes technically only happens in very niche
scenarios, and the fix seems to have some other potential issues.
Prematurely merged by Jim.
Split render_texture and derived fields in obs_core_video into a new
obs_core_video_mix struct. Add new APIs to add additional obs_view
objects to the render loop, each with a separate render_texture /
obs_core_video_mix.
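Usage sketch of the new API (simplified; channel 0 and the raw-output
use case are just examples):

```c
#include <obs.h>

static video_t *add_secondary_mix(obs_source_t *source)
{
	obs_view_t *view = obs_view_create();
	obs_view_set_source(view, 0, source);

	/* Registers the view with the render loop; it gets its own
	 * obs_core_video_mix / render_texture, and the returned
	 * video output is fed by that private mix. */
	return obs_view_add(view);
}
```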
- Changes the default base exponent value to 1.5 from 2.0
- Applies a random skew of ±0.05 to the exponent to lessen the
  "water hammer" effect caused by predictable backoff techniques
- Fixes the logging associated with exponential backoff to log the
true reconnect delay value
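Put together, the delay computation now looks roughly like this
(illustrative sketch, not the actual obs-outputs code):

```c
#include <math.h>
#include <stdlib.h>

static int reconnect_delay_sec(int base_sec, int attempt)
{
	/* Base exponent is now 1.5 (was 2.0), with a random skew in
	 * [-0.05, +0.05] so many clients don't reconnect in lockstep. */
	double skew = ((double)rand() / RAND_MAX) * 0.1 - 0.05;
	double exponent = 1.5 + skew;
	int delay = (int)(base_sec * pow(exponent, attempt));
	return delay; /* log this value, i.e. the true delay used */
}
```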
Allows a frontend to set the maximum audio buffering latency, and to
specify whether the audio buffering is fixed (at the maximum audio
buffering latency) or dynamically increasing from 0. This will be
useful if the user wishes to output audio to devices or through a
virtual audio device at a guaranteed minimal latency.
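Usage sketch, assuming the `obs_audio_info2` / `obs_reset_audio2()`
API this adds (fixed buffering capped at 50 ms as an example):

```c
#include <obs.h>

static bool init_low_latency_audio(void)
{
	struct obs_audio_info2 oai = {
		.samples_per_sec = 48000,
		.speakers = SPEAKERS_STEREO,
		.max_buffering_ms = 50,  /* maximum buffering latency */
		.fixed_buffering = true, /* fixed, not growing from 0 */
	};
	return obs_reset_audio2(&oai);
}
```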