Currently, several shaders need "DrawMatrix" techniques to support the
possibility that the input texture is a "YUV" format. In addition,
"DrawMatrix" is overloaded to cover translation in both directions,
even though it was written for RGB to "YUV" only.
A cleaner solution is to handle "YUV" to RGB up-front as part of format
conversion, and ensure only RGB inputs reach the other shaders. This is
necessary to someday perform correct scale filtering without the cost of
redundant "YUV" conversions per texture tap.
A necessary prerequisite for this is to add conversion support for
VIDEO_FORMAT_I444, and that is now in place. There was already a hack in
place to cover VIDEO_FORMAT_Y800. All other "YUV" formats already have
conversion functions.
"DrawMatrix" has been removed from shaders that only supported "YUV" to
RGB conversions. It still exists in shaders that perform RGB to "YUV"
conversions, and the implementations have been sanitized accordingly.
This new scale filter computes pixels by weighing the coverage area of
source pixels over the target pixel. This algorithm works well for both
upsampling and downsampling, but was mainly designed to upscale
high-quality low-resolution sources like RGB/HDMI retro consoles. I've
heard of people using odd workarounds like scaling up to very high
resolutions before scaling back down to preserve pixel sharpness. This
algorithm addresses that use-case in a much more direct fashion.
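As a sketch of the idea, here is a hypothetical 1-D helper (not the
actual shader code; the 2-D weight is the product of the per-axis
overlaps):

    #include <math.h>

    /* Area-weighted average of the source pixels overlapping one
     * target pixel along a single axis. */
    static float area_sample_1d(const float *src, int src_len,
                                float target_lo, float target_hi)
    {
        float sum = 0.0f;
        int first = (int)floorf(target_lo);
        int last = (int)ceilf(target_hi);

        for (int i = first; i < last; i++) {
            if (i < 0 || i >= src_len)
                continue;

            /* overlap of source pixel [i, i+1) with the target span */
            float lo = fmaxf((float)i, target_lo);
            float hi = fminf((float)(i + 1), target_hi);
            sum += src[i] * (hi - lo);
        }

        /* normalize by the target pixel's width */
        return sum / (target_hi - target_lo);
    }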
The Area scale filter does a better job of preserving the thickness of
thin features than the Point filter.
The Area scale filter does not look at source pixels that lie outside
of the target pixel, leading to a much sharper image than Bilinear,
Bicubic, and Lanczos filters.
This filter should interpolate pixels in linear space, but OBS is not
equipped to do that at the moment.
libobs: Add GPU effect, and wire up scene serialization.
obs-filters: Add Area as an option for scale_filter.
UI: Add Area as an option for both scene items and canvas downscaling.
Adds display_duration, which declares the minimum duration during which
a caption text will not be overwritten by a new one. To keep the
functions backward-compatible, obs_output_output_caption_text2 was
added, while obs_output_output_caption_text1 continues to have a
2-second default.
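A usage sketch, assuming display_duration is a duration in seconds
(output is an obs_output_t *):

    /* Keep this caption up for at least 5 seconds. */
    obs_output_output_caption_text2(output, "Hello, world", 5.0);

    /* Old entry point; keeps the 2-second default. */
    obs_output_output_caption_text1(output, "Hello, world");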
Adds the ability to encode by passing NV12 textures. This uses a
separate thread for texture-based encoders with a small queue of
textures. A shared output texture with a keyed mutex is locked between
OBS and each encoder. A new encoder callback and capability flag are
used to encode with textures.
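A rough sketch of an encoder opting in to texture-based encoding. The
callback and flag names below are assumptions; check obs-encoder.h for
the exact declarations:

    static bool my_encode_texture(void *data, uint32_t handle,
                                  int64_t pts, uint64_t lock_key,
                                  uint64_t *next_key,
                                  struct encoder_packet *packet,
                                  bool *received_packet)
    {
        /* 'handle' is the shared NV12 texture: acquire it through the
         * keyed mutex with 'lock_key', encode, then release with
         * '*next_key' so OBS can reuse the texture. */
        *received_packet = false;
        return true; /* stub: no packet produced yet */
    }

    static struct obs_encoder_info my_encoder = {
        .id = "my_texture_encoder",
        .type = OBS_ENCODER_VIDEO,
        .codec = "h264",
        .caps = OBS_ENCODER_CAP_PASS_TEXTURE,
        .encode_texture = my_encode_texture,
        /* ... get_name/create/destroy and the usual callbacks ... */
    };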
This splits the "do_encode" function into "do_encode" and
"send_off_encoder_packet"; the latter allows texture-based encoders to
manage their own encoding and simply send off a packet to the outputs.
Allows one encoder to defer to another in case of failure or an
unsupported feature. Okay, fine, it's mostly a hack so the new NVENC
encoder can fall back to the FFmpeg encoder when NV12 textures aren't
in use, so that it does not have to implement raw fallback support
itself. The settings and properties are pretty much the same, so
there's no reason not to utilize it in order to save time that could
otherwise be spent more productively.
Reduces GPU usage when encoding is not active. Does not perform color
conversion, frame staging, or frame downloading unless encoding is
explicitly active.
Because it would be troublesome to add the ability to remove source
types (for example, if a script fails to reload), instead make it so
source types can be temporarily disabled while the program is running.
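A usage sketch, assuming the new call is obs_enable_source_type (the
source type id here is hypothetical):

    /* Hide the source type while its script is broken... */
    obs_enable_source_type("my_script_source", false);

    /* ...and bring it back once the script reloads successfully. */
    obs_enable_source_type("my_script_source", true);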
This is to prevent confusion with video_thread in
libobs/media-io/video-io.c, which is used exclusively for video
encoding/output. Also prevents confusion in the profiler log data.
Decoupling the audio from the video causes the audio to be played right
when it's received rather than attempting to sync up to the video
frames. This is useful with certain async sources/devices when the
audio/video timestamps are not reliable.
Naturally, because it plays audio right when it's received, this should
only be used when the async source is operating in unbuffered mode;
otherwise, the video frame timing will be out of sync by the amount of
buffering the video currently has.
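In practice the two settings go together; for example (source is an
obs_source_t *):

    /* Play audio immediately and keep video unbuffered so the two
     * don't drift apart by the buffering amount. */
    obs_source_set_async_unbuffered(source, true);
    obs_source_set_async_decoupled(source, true);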
This allows certain types of modules (particularly scripting-related
modules) to initialize extra data once all other modules have loaded.
Because front-ends may wish to have custom handling for loading
modules, the front-end must manually call obs_post_load_modules after
it has completed loading all plug-in modules.
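In a front-end, the sequence looks roughly like this (any custom
module loading elided):

    obs_load_all_modules();
    /* ... any additional front-end-specific module loading ... */
    obs_post_load_modules(); /* modules may now initialize extra data */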
Closes jp9000/obs-studio#965
(This commit also modifies the decklink, linux-v4l2, mac-avcapture,
obs-ffmpeg, and win-dshow modules)
Originally, async buffering for sources was supposed to be a
user-controllable flag. However, that turned out to be less than ideal
because sources (such as the win-dshow plugin) were programmed with
automatic control over their buffering (such as automatically detecting
USB 2.0 capture devices and enabling buffering in those cases).
The fact that it was a flag caused a design flaw where buffering
values would be overwritten when a source is loaded from save data.
Because of that, this flag is being deprecated and replaced with a
specific function to enable unbuffered mode instead.
Originally, obs_get_video_info would recreate the obs_video_info
structure that was originally passed in via obs_reset_video. This
changes it to simply store a copy of the obs_video_info when
obs_reset_video is called, and then copy that to the parameter of
obs_get_video_info when it is called.
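A sketch of the simplified behavior (public signatures per obs.h; the
internals are assumed for illustration):

    static struct obs_video_info cached_ovi;

    int obs_reset_video(struct obs_video_info *ovi)
    {
        /* ... (re)initialize the video subsystem ... */
        cached_ovi = *ovi; /* keep a copy for later queries */
        return OBS_VIDEO_SUCCESS;
    }

    bool obs_get_video_info(struct obs_video_info *ovi)
    {
        *ovi = cached_ovi; /* return the stored copy */
        return true;
    }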
When frames were skipped, the skipped frame count would increment, but
the total frame count would not, causing the percentage calculation to
fail.
Additionally, the skipped frames log reporting has been moved to
media-io/video-io.c instead of each output.
Adds functions to turn on audio monitoring, allowing the user to hear
playback of an audio source over the user's speakers. It can be set to
turn off monitoring and only output to stream, to output only to
monitoring, or to do both.
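The three modes map to calls along these lines (source is an
obs_source_t *):

    /* output to stream only (monitoring off) */
    obs_source_set_monitoring_type(source, OBS_MONITORING_TYPE_NONE);

    /* monitor only (not sent to the stream) */
    obs_source_set_monitoring_type(source,
                                   OBS_MONITORING_TYPE_MONITOR_ONLY);

    /* both */
    obs_source_set_monitoring_type(source,
                                   OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT);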
On Windows, audio monitoring uses WASAPI. Windows is also capable of
syncing the audio to the video according to when the video frame itself
was played.
On Mac, it uses AudioQueue.
On Linux, it's not currently implemented and won't do anything (to be
implemented).
Because D3D11 specifically does not support an L8 texture format (you
have to use a shader swizzle), manually convert Y800 signals to RGBX
instead. This also fixes a bug where Y800 signals would render red.
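The per-pixel conversion is trivial; an illustrative loop (not the
actual libobs code):

    /* Y800 (8-bit grayscale) to RGBX: replicate luma into R, G, B. */
    for (size_t i = 0; i < (size_t)width * height; i++) {
        out[i * 4 + 0] = in[i];
        out[i * 4 + 1] = in[i];
        out[i * 4 + 2] = in[i];
        out[i * 4 + 3] = 0xFF;
    }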
Closes jp9000/obs-studio#718
For displays, instead of using the draw_callbacks_mutex and risk a
reverse mutual lock scenario, use a separate mutex to lock display size
data.
This bug was exposed when trying to reorder filters in the UI module.
The UI thread would try to reorder the filters, locking the filter mutex
of the source, and then the reorder would signal the UI to resize the
display, so the display would lock its draw_callbacks_mutex. Then, in
the graphics thread, it would lock the display's draw_callbacks_mutex,
try to draw the source, and then the source would try to lock that same
filter mutex.
A mutex trace:
UI thread -> lock source filter mutex -> waiting on display mutex
graphics thread -> lock display mutex -> waiting on source filter mutex
Closes jp9000/obs-studio#714
If an async source is cropped on one side, then when the program is
restarted and the source is loaded from file, the async source will
start out with a width/height of zero. This will cause the async source
to not be drawn if cropping or scale filtering is added to the scene
item, because it has to be rendered to a texture first. However, the
source cannot reset its size until it's drawn, leaving it in a
perpetual state of having a 0x0 size.
This fixes that problem by ensuring that the async source size is always
reset even when not being rendered.
Closes jp9000/obs-studio#686
Allows getting the current active framerate that the core is rendering
at. This takes into account any rendering lag or stalls that may be
occurring.
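For example (assuming the getter is obs_get_active_fps):

    double fps = obs_get_active_fps(); /* actual rendered fps, not the
                                        * configured target */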
Allows the use of scale filters such as point, bicubic, and lanczos on
specific scene items, disabled by default. When using one of the latter
two options, if the item's scale is under half of the source's original
size, it uses the bilinear low-resolution downscale shader instead.
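Per-item usage sketch (item is an obs_sceneitem_t *):

    /* Use bicubic for this item only; OBS_SCALE_DISABLE restores the
     * default behavior. */
    obs_sceneitem_set_scale_filter(item, OBS_SCALE_BICUBIC);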
(Note: This commit also modifies obs-ffmpeg and obs-outputs)
API Changed:
obs_output_info::void (*stop)(void *data);
To:
obs_output_info::void (*stop)(void *data, uint64_t ts);
This fixes the long-standing design flaw where obs_output_stop and the
output 'stop' callback would just shut down the output without
considering the timing of when obs_output_stop was called, discarding
any possible buffering and causing the output to get cut off at an
unexpected time.
The 'stop' callback of obs_output_info now takes a timestamp with the
expectation that the output will use that timestamp to stop output data
in accordance with that timing. obs_output_stop now records the
timestamp at the time the function is called and calls the 'stop'
callback with that timestamp. If needed, obs_output_force_stop will
still stop the output immediately without buffering.
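A sketch of a 'stop' callback under the new signature (the struct
fields and flush behavior are assumptions for illustration):

    struct my_output {
        uint64_t stop_ts;
        /* ... */
    };

    static void my_output_stop(void *data, uint64_t ts)
    {
        struct my_output *out = data;

        /* Record the requested stop time; the packet loop keeps
         * sending buffered packets whose timestamps precede 'ts',
         * then shuts the output down. */
        out->stop_ts = ts;
    }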
(Note: Also modified the obs-ffmpeg plugin module)
Allows frame data to pass 8-bit grayscale images (Y800 color format).
Closes jp9000/obs-studio#515
Adds deinterlacing API functions. Both standard and 2x variants are
supported. Deinterlacing is set via obs_source_set_deinterlace_mode and
obs_source_set_deinterlace_field_order.
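For example (source is an obs_source_t *):

    obs_source_set_deinterlace_mode(source,
                                    OBS_DEINTERLACE_MODE_YADIF_2X);
    obs_source_set_deinterlace_field_order(source,
                                           OBS_DEINTERLACE_FIELD_ORDER_TOP);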
This was implemented in the core itself because deinterlacing should
happen before effect filters are processed, but after async filters are
processed. If this were added as a filter, there is the possibility
that a different filter would be processed before deinterlacing, which
could mess with the result. It was also a bit easier to implement this
way due to the fact that deinterlacing may need to have access to the
previous async frame.
Effects were split into separate files to reduce load time (especially
for the yadif shaders, which take a significant amount of time to
compile).
Instead of updating the async texture variables directly in the source,
allow the async texture variables to be passed via function parameters,
making it possible to upload more than one frame to more than one
texture.
This code is primarily intended to be used to upload/convert the
"previous" async frame for the deinterlacer (if necessary).
Creates an effect in the target variable only if its current value is
null. This will be used for the deinterlacing effects to prevent having
to compile the shaders unless they're actually being used.
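The pattern is a simple lazy load; a hypothetical helper
(gs_effect_create_from_file is the real graphics call):

    static gs_effect_t *ensure_effect(gs_effect_t **effect,
                                      const char *file)
    {
        /* compile the effect only on first use */
        if (!*effect)
            *effect = gs_effect_create_from_file(file, NULL);
        return *effect;
    }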