Implement the log_processor_info function on BSD and add ifdefs so that
only the implementation specific to the platform is built.
Also add an ifdef around the call to that function to make sure it is
only called on platforms where it is actually implemented.
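A minimal sketch of the guard pattern (the macro choices here are
illustrative; the build may key off different defines):

    #if defined(__FreeBSD__)
    static void log_processor_info(void)
    {
            /* BSD-specific implementation, e.g. via sysctl */
    }
    #endif

    static void log_system_info(void)
    {
    #if defined(__linux__) || defined(__FreeBSD__)
            /* only call it where an implementation exists */
            log_processor_info();
    #endif
    }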
Split the function logging the processor information on *nix into two
parts: the part logging the number of logical cores is portable and
works on all systems that support POSIX.1, while the other part is
specific to Linux.
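The portable half reduces to a single sysconf query; a sketch (blog is
the libobs logger; _SC_NPROCESSORS_ONLN is a widely supported
extension):

    #include <unistd.h>
    #include <util/base.h>  /* blog */

    static void log_processor_cores(void)
    {
            /* portable across POSIX.1 systems */
            blog(LOG_INFO, "Logical Cores: %ld",
                 sysconf(_SC_NPROCESSORS_ONLN));
    }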
Add the relevant header file needed on FreeBSD and use yet another
ifdef to call pthread_set_name_np, as the function name differs from
those on the other platforms.
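A hedged sketch of the resulting shim (FreeBSD declares its variant in
pthread_np.h; the macOS variant only names the calling thread):

    #include <pthread.h>
    #if defined(__FreeBSD__)
    #include <pthread_np.h>
    #endif

    static void set_thread_name(const char *name)
    {
    #if defined(__APPLE__)
            pthread_setname_np(name);
    #elif defined(__FreeBSD__)
            pthread_set_name_np(pthread_self(), name);
    #else /* Linux (requires _GNU_SOURCE) */
            pthread_setname_np(pthread_self(), name);
    #endif
    }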
Add a definition on FreeBSD to enable getline in stdio, as it is not
(yet?) available by default. According to the manpage, getline was a
GNU extension but was standardized in POSIX.1-2008.
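Concretely, the macro is _WITH_GETLINE, and it has to be defined before
stdio.h is included:

    #if defined(__FreeBSD__)
    #define _WITH_GETLINE  /* FreeBSD hides getline without this */
    #endif
    #include <stdio.h>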
API Changed (in struct obs_encoder_info):
----------------------------------------
bool (*get_audio_info)(void *data, struct audio_convert_info *info);
bool (*get_video_info)(void *data, struct video_scale_info *info);
To:
----------------------------------------
void (*get_audio_info)(void *data, struct audio_convert_info *info);
void (*get_video_info)(void *data, struct video_scale_info *info);
The encoder video/audio information callbacks no longer need to manually
query the libobs video/audio information, that information is now passed
via the parameter, which the callbacks can modify.
The refactor that reduces boilerplate in the encoder video/audio
information callbacks also removes the need for their return values, so
change the return types to void.
I realized that the get_video_info and get_audio_info encoder callbacks
always have to manually query the libobs audio/video information.
This fixes that problem by passing the libobs video/audio information in
the structures passed to those callbacks so they don't have to query it
each time, reducing needless boilerplate code for encoders.
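With the new signatures, a callback receives the current libobs info
pre-filled and only overrides what it needs; a sketch (the function
name is hypothetical):

    static void my_enc_video_info(void *data, struct video_scale_info *info)
    {
            /* info already holds the libobs video info; just override
               the fields this encoder cares about */
            if (info->format != VIDEO_FORMAT_NV12)
                    info->format = VIDEO_FORMAT_NV12;
    }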
Adds the ability to hint to encoders which format should be used.
This is particularly useful if libobs is currently operating in planar
4:4:4, but you want to force an encoder used for streaming to convert to
NV12 to prevent streaming issues.
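For example, a front end keeping libobs in planar 4:4:4 but streaming
in NV12 could hint the streaming encoder like this (assuming the hint
is exposed as a setter along the lines of
obs_encoder_set_preferred_video_format):

    /* ask the streaming encoder to convert its input to NV12 */
    obs_encoder_set_preferred_video_format(stream_encoder,
                                           VIDEO_FORMAT_NV12);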
Fixes a crash that could happen if any of the mutexes are used in the
create callback, or before the obs_source_init function is called.
I'm not sure how this function order slipped because it seems fairly
obvious that these mutexes should be created before the create callback.
Had this crash happen to me when creating a WASAPI output source: the
create callback of the WASAPI source creates a thread which outputs
audio, and that thread managed to call obs_source_output_audio before
the obs_source_init function was called, which in turn caused it to try
to use a null mutex.
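Conceptually the fix just reorders initialization; a rough sketch
(internal names approximate):

    /* create the source's mutexes before the plugin can see the source */
    if (!obs_source_init(source, info))
            goto fail;

    /* only now run the create callback, which may spawn threads that
       immediately call obs_source_output_audio */
    source->context.data = info->create(settings, source);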
When caching a new frame, keep a reference to the frame while copying to
ensure that the frame is not potentially destroyed for whatever reason
while that data is being copied.
The obs_source::async_reset_texture variable can cause a data race
between threads to occur because it could be set to true in one thread
then changed back to false in another thread. This could cause the
async texture to not update its size when it's supposed to, which can
cause a crash or corruption when copying data from a frame of a
differing size.
The solution to this is to:
- Delete the async_reset_texture variable, and make the
set_async_texture_size function change the texture size if the
async_width, async_height, or async_format variables differ from the
frame's width/height/format. Those variables are then only ever set
in the libobs graphics thread.
- Make the cache_video function use separate variables from other
functions to detect a change in size (because the texture should only
be resized in the libobs graphics thread). These variables are
async_cache_width, async_cache_height, and async_cache_format, which
are only ever set in the thread that calls obs_source_output_video.
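A sketch of the cache-side check (variable names taken from the
description above; surrounding code omitted):

    /* runs only in the thread calling obs_source_output_video */
    if (frame->width  != source->async_cache_width  ||
        frame->height != source->async_cache_height ||
        frame->format != source->async_cache_format) {
            source->async_cache_width  = frame->width;
            source->async_cache_height = frame->height;
            source->async_cache_format = frame->format;
            /* the graphics thread independently compares async_width/
               async_height/async_format when resizing the texture */
    }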
How to replicate the data race:
- On OS X, use window capture on a TextEdit window, then continually
resize the TextEdit window.
The bilinear lowres scale effect was using 'output' as a variable name,
which is apparently a reserved keyword in GLSL on Macs. This slipped
by me because it didn't occur with OpenGL on my Windows machine.
The minimum and maximum color range values were not being set by the
video_format_get_parameters function when full range was in use; I
assume that's because the expected min/max values of full range are
{0.0, 0.0, 0.0} and {1.0, 1.0, 1.0}, so I'm just making it set those
values when full range is used.
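A minimal sketch of the change (the real function also computes the
YUV matrix and the limited-range values):

    static const float full_min[3] = {0.0f, 0.0f, 0.0f};
    static const float full_max[3] = {1.0f, 1.0f, 1.0f};

    if (range == VIDEO_RANGE_FULL) {
            /* previously these were left untouched for full range */
            memcpy(min_range, full_min, sizeof(full_min));
            memcpy(max_range, full_max, sizeof(full_max));
    }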
Way to replicate the issue (Windows):
1.) Create a win-dshow device capture source
2.) Select a device and set it to a YUV color format to enable YUV color
conversion
3.) Select "Full Range" in the color range property.
4.) Restart OBS; the device will then start up but display green
because the min/max range values were never set and are left at their
default value of 0.
Due to a bad 'if' expression, when a filter that is not last in the
chain was disabled or bypassed, the filter's video processing function
would still be called unintentionally.
This fix makes sure that it only calls the appropriate render functions
if the next filter target is the source; otherwise, it will just call
obs_source_video_render to process the next filter in the chain.
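Schematically (only obs_source_video_render is a real function here;
the rest is illustrative):

    if (target == source) {
            /* next target is the source itself: render it directly */
            render_source(source);
    } else {
            /* more filters remain: let the next one process the chain */
            obs_source_video_render(target);
    }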
How to replicate the bug:
1. Create two crop filters on the same source
2. Give each crop filter a different distinct value
3. Disable both crop filters
4. The image would still be cropped
The normal scaling methods cannot sample enough pixels to create an
accurate output image when the output size is under half the base size,
so use the bilinear low resolution scaling effect in that case instead
to ensure a more accurate low resolution image.
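One plausible form of the check:

    /* under half the base size, normal sampling cannot cover enough
       source pixels, so fall back to the bilinear lowres effect */
    bool use_bilinear_lowres = out_width  < (base_width  / 2) &&
                               out_height < (base_height / 2);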
If you don't need to see what's displayed, then this is particularly
useful for two reasons:
1. It reduces the number of draw/present calls
2. It can prevent issues with certain hardware setups where rendering on
a monitor hooked up to a separate card can experience slowdowns
This reverts commit 99c674e41f.
These commits were originally added to allow multiple user interfaces to
use the same plugins, but I soon realized that multiple user interfaces
can use multiple libobs versions, so each user interface should have its
own set of plugins to manage. Some user interfaces may not wish to use
certain plugins anyway, so this fixes that issue as well.
This reverts commit 92d800cc18, reversing
changes made to 35a4acede0.
These commits were originally added to allow multiple user interfaces to
use the same plugins, but I soon realized that multiple user interfaces
can use multiple libobs versions, so each user interface should have its
own set of plugins to manage. Some user interfaces may not wish to use
certain plugins anyway, so this fixes that issue as well.
Instead of manually setting the blend state to the desired values, use
gs_reset_blend_state to ensure we have the default blend state (which
for color is a typical srcalpha/invsrcalpha alpha blending operation,
then the alpha channels are added together).
This fixes an issue where filtered scenes would look strange due to the
fact that alpha was not being blended properly.
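So instead of pairing gs_blend_function calls by hand, the render path
can simply do:

    /* back to the default: srcalpha/invsrcalpha for color, with the
       alpha channels added together */
    gs_reset_blend_state();
    /* ... draw the filtered scene ... */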
This adds the ability to set the blend states of color and alpha
separately.
The default blend state has also changed so that alpha is always added
together to ensure that the destination image always gets an alpha value
that is actually usable after the operation (for render targets).
Old default state:
color source: GS_BLEND_SRCALPHA, color dest: GS_BLEND_INVSRCALPHA
alpha source: GS_BLEND_SRCALPHA, alpha dest: GS_BLEND_INVSRCALPHA
New default state:
color source: GS_BLEND_SRCALPHA, color dest: GS_BLEND_INVSRCALPHA
alpha source: GS_BLEND_ONE, alpha dest: GS_BLEND_ONE
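With the separated state, the new default corresponds to something
like:

    /* color: standard alpha blending; alpha: accumulate so render
       targets end up with a usable alpha value */
    gs_blend_function_separate(GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA,
                               GS_BLEND_ONE, GS_BLEND_ONE);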
This fixes an issue where cached frames would never be freed after
having been allocated, because there was no upper limit on the cache
size. Now, if cached frames go unused for a specific period of time,
they are deallocated and removed from the cache.
This is preferable to having an upper cache limit due to the potential
for async delay filtering.
Under certain circumstances the cache could be prone to growing too
large unintentionally. Setting a hard maximum limit should prevent
memory from growing if we suddenly get a lot of frames.
Async frames were only swapped when rendering, or when not visible.
This is a flawed design, because there are certain circumstances where
the source is neither visible nor currently rendering.
This is what caused a memory leak when scene items were marked as
invisible, because if a source has an async child source and decides not
to render that source for whatever reason, the child source would not
process the async frames at all, and the cache would just grow.
Moving the async frame cycle to tick fixes the issue, because tick is
always called regardless of circumstance.
Filtering the video before it's output to the texture means that it
happens after all the processing on the timestamps and such of the video
data. This way, the video filter does not have to worry about what's
currently buffered, and it won't affect timing.
When OBS is shutting down, if for some reason the filter is destroyed
before the parent source is destroyed, it would try to remove itself
from the source, but it would decrement the reference and try to destroy
itself again while already in the process of destroying itself.
So, the solution was simply to make sure that if it's removing itself
from the source that it doesn't decrement its own reference.
The "last" filter that's rendered is technically at filter index 0, so
enumeration needs to be from the last index in the list to the first
index in the list.
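In other words, enumeration has to walk the array backwards; a sketch
(field names illustrative):

    /* index 0 holds the filter rendered last, so report it last */
    for (size_t i = source->filters.num; i > 0; i--) {
            obs_source_t *filter = source->filters.array[i - 1];
            callback(source, filter, param);
    }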
When rendering a filter to a texture, the target is empty and unused, so
there's no reason for blending to be on when rendering the filter to a
render target.
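A sketch of the render-to-texture path with blending switched off:

    gs_enable_blending(false);
    /* draw the filter into the (empty) render target */
    gs_enable_blending(true);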
obs_source_process_filter tried to do everything in a single function,
but the problem is that effect parameters would not be properly
accounted for due to the way it internally draws, so it was necessary
to split it into two functions: you first call
obs_source_process_filter_begin, then you set your effect parameters,
then you finally call obs_source_process_filter_end. This ensures that
the effect parameters are set when the filter is drawn.
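Typical usage from a filter's video_render callback now looks like this
(the gamma parameter is just an example):

    /* begin: sets up the render target and draws prior filters */
    obs_source_process_filter_begin(filter, GS_RGBA,
                                    OBS_ALLOW_DIRECT_RENDERING);

    /* effect parameters can now be set safely */
    gs_effect_set_float(gamma_param, 2.2f);

    /* end: draws using the effect, with the parameters intact */
    obs_source_process_filter_end(filter, effect, width, height);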
When the filter chain finally reaches the source and the last filter in
the chain is set to not render directly (meaning it has to render to
texture), it would not render the source with any effect due to the fact
that it expects a filter to be present.
Adds a source callback function that is used when a filter is removed
from a source. This ensures that any data that might be associated with
that source can be cleaned up if necessary.
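A filter opts in by implementing the callback in its obs_source_info;
a sketch:

    static void my_filter_remove(void *data, obs_source_t *source)
    {
            /* release anything tied to the source being detached from */
    }

    struct obs_source_info my_filter = {
            /* ... id, type, create/destroy, etc. ... */
            .filter_remove = my_filter_remove,
    };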
There are a few more conditions which need to be checked to ensure
whether we can actually bypass or not; in this particular case we are
using the YUV texture shaders, plus the image can also be flipped, so we
can't really use the bypass optimization in those situations.
These functions are primarily for use with filters: filters need to be
able to get the width/height of a target source without necessarily
getting the post-filtered dimensions.
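A filter can use them on its target to size its own output; a sketch
(assuming the pair added here is obs_source_get_base_width and
obs_source_get_base_height):

    obs_source_t *target = obs_filter_get_target(filter);
    uint32_t cx = obs_source_get_base_width(target);
    uint32_t cy = obs_source_get_base_height(target);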
Previously I had it set so that it would set the async_width and
async_height variables to 0,0 if a null frame occurred, but the problem
with this is that it would inadvertently cause the frame cache to clear
when the source started receiving frames again, because that was
detected as a size change.
So instead of changing async_width and async_height to 0 when a null
frame occurs, make obs_source_get_width/obs_source_get_height return 0
when the source is set to an inactive state.
When a frame is processed by a filter, it comes directly from the
source's video frame cache. However, if a filter is using or processing
those frames for whatever reason, there is no guarantee that the
frames will persist during processing, and frames could eventually be
deallocated unexpectedly, for example when the resolution or format
changes.
So the solution is to implement simple reference counting for the frames
so that the frames will exist until they have been released by any
source or filter that's using them.
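With reference counting in place, a consumer holds the frame explicitly
while using it; a sketch (process() stands in for whatever work the
consumer does, assuming a get/release pair like obs_source_get_frame
and obs_source_release_frame):

    struct obs_source_frame *frame = obs_source_get_frame(source);
    if (frame) {
            /* frame->data stays valid here even if the source's
               resolution or format changes meanwhile */
            process(frame);
            obs_source_release_frame(source, frame);
    }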
Fixes a bug where, when a source first starts up its async cache, it
unintentionally resets that cache, meaning that the first few frames
would be lost.
This optionally specifies that the int/float value should be displayed
with a slider rather than an up/down control. UIs are not necessarily
required to implement it; it's meant to be more of a hint.
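For example (assuming the hint is exposed through dedicated property
constructors such as obs_properties_add_int_slider):

    /* same semantics as obs_properties_add_int, but UIs honoring the
       hint show a slider instead of an up/down control */
    obs_properties_add_int_slider(props, "opacity", "Opacity", 0, 100, 1);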
If a negative number was specified, it would not parse the negative
sign and thus would not interpret the number as negative, causing
shader/effect parsing to simply fail on the file.
This is particularly important for the filter pipeline in order to
ensure that when the last filter is reached that the original blend
state is properly reset.
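A sketch of how the pipeline guards the state (assuming a push/pop pair
along the lines of gs_blend_state_push/gs_blend_state_pop):

    gs_blend_state_push();
    gs_blend_function(GS_BLEND_ONE, GS_BLEND_ZERO);
    /* ... render the filter chain ... */
    gs_blend_state_pop();  /* original blend state restored */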
When render targets are used, they output to the render target inverted
due to the way that OpenGL works. This fixes that issue by inverting
the projection matrix so that it renders the image upside down, and by
inverting the front face from counterclockwise to clockwise.
The wrong function was being used to recurse through the filter chain in
obs_source_process_filter, obs_source_get_[width/height] would get the
post-effect dimensions rather than the pre-effect dimensions.
The frame is definitely meant to be modifiable if needed, so it
shouldn't be const. I originally meant for these frames to be
duplicated, but with the source video cache that's both unnecessary and
would reduce performance.
If the audio data had the same format/samplerate as the obs audio
subsystem, it would fail to simply destroy the resampler and set it to
NULL, and then any audio data going through would use the resampler that
was being used before that, causing audio to become garbage.
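Schematically, the fixed path tears the resampler down when no
conversion is needed (formats_match is illustrative):

    if (formats_match(&obs_audio_info, &source_audio_info)) {
            /* same format/samplerate: drop any stale resampler instead
               of silently keeping it around */
            audio_resampler_destroy(source->resampler);
            source->resampler = NULL;
    } else {
            audio_resampler_destroy(source->resampler);
            source->resampler = audio_resampler_create(&to, &from);
    }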
This bug only started appearing when I recently changed the libobs
internal audio subsystem format to non-interleaved floating point,
which is a common format, and thus caused the bug to actually occur
more often.