mirror of https://github.com/obsproject/obs-studio.git synced 2024-09-20 13:08:50 +02:00
Commit Graph

101 Commits

Author SHA1 Message Date
Palana
4daa5c7aa7 media-io: Fix check before passing pointer to av_freep
Found by clang-3.7 (trunk 236075) via -Wpointer-bool-conversion:

warning: address of array 'rs->output_buffer' will always evaluate
to 'true'
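
A hedged illustration of the warning and the kind of check that silences
it; the field name comes from the warning above, but the surrounding
struct layout is hypothetical rather than the actual media-io resampler:

#include <libavutil/mem.h>
#include <stdint.h>

#define MAX_PLANES 8

struct resampler {
        uint8_t *output_buffer[MAX_PLANES]; /* array member: its address is never NULL */
};

static void free_output(struct resampler *rs)
{
        /* 'if (rs->output_buffer)' is always true, which is what clang
         * flags; check the pointer stored in the array instead. */
        if (rs->output_buffer[0])
                av_freep(&rs->output_buffer[0]);
}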
2015-05-11 20:42:53 +02:00
jp9000
1b53215b99 libobs: Add function to get video format name 2015-04-18 00:03:21 -07:00
jp9000
908a165d62 Add planar YUV 4:4:4 format support
Adds the ability to natively output with planar YUV 4:4:4.
2015-04-17 20:16:40 -07:00
jp9000
822d1ec3fc libobs: Add packed yuv to planar 4:4:4 conversion 2015-04-16 22:51:31 -07:00
jp9000
0f346561ff libobs: Refactor pack_lum to pack_shift
Allows the ability to specify the bytes to shift as a parameter.  Useful
for planar 4:4:4 conversion.
2015-04-16 22:48:54 -07:00
jp9000
4e54c89f9c libobs: Fix full range min/max output
The minimum and maximum color range values were not being set by the
video_format_get_parameters function when full range was in use; I
assume it's because the expected min/max values for full range are
{0.0, 0.0, 0.0} and {1.0, 1.0, 1.0}, so I'm just making it set those
values when full range is in use.

Way to replicate the issue (Windows):
1.) Create a win-dshow device capture source
2.) Select a device and set it to a YUV color format to enable YUV color
conversion
3.) Select "Full Range" in the color range property.
4.) Restart OBS; the device will start up but will display green because
the min/max range values were never set and are left at their default
value of 0.
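
A minimal sketch of the shape of the fix, with a hypothetical helper and
parameter names (the actual change is inside video_format_get_parameters
in media-io):

#include <string.h>

/* If full range is requested, fill the min/max arrays explicitly instead
 * of leaving them at whatever the caller initialized them to (often 0). */
static void set_full_range_min_max(float min_range[3], float max_range[3])
{
        static const float full_min[3] = {0.0f, 0.0f, 0.0f};
        static const float full_max[3] = {1.0f, 1.0f, 1.0f};

        if (min_range)
                memcpy(min_range, full_min, sizeof(full_min));
        if (max_range)
                memcpy(max_range, full_max, sizeof(full_max));
}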
2015-04-10 07:27:26 -07:00
jp9000
84e1f47ced (API Change) Add support for multiple audio mixers
API changed:
--------------------------

void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder);

obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output);

obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings);

Changed to:
--------------------------

/* 'idx' specifies the track index of the output */
void obs_output_set_audio_encoder(
		obs_output_t *output,
		obs_encoder_t *encoder,
		size_t idx);

/* 'idx' specifies the track index of the output */
obs_encoder_t *obs_output_get_audio_encoder(
		const obs_output_t *output,
		size_t idx);

/* 'mixer_idx' specifies the mixer index to capture audio from */
obs_encoder_t *obs_audio_encoder_create(
		const char *id,
		const char *name,
		obs_data_t *settings,
		size_t mixer_idx);

Overview
--------------------------
This feature allows multiple audio mixers to be used at a time.  This
capability was added with surprisingly little extra overhead.  Audio
will not be mixed unless it's assigned to a specific mixer, and mixers
will not mix unless they have an active mix connection.

Mostly this will be useful for being able to separate out specific audio
for recording versus streaming, but will also be useful for certain
streaming services that support multiple audio streams via RTMP.

I didn't want to use a variable number of mixers because of the desire
to reduce heap allocations, so currently I set the limit to 4
simultaneous mixers; this number can be increased later if needed, but
honestly I feel it's just the right number to use.

Sources:

Sources can now specify which audio mixers their audio is mixed to; this
can be a single mixer or multiple mixers at a time.  The
obs_source_set_audio_mixers function sets, as a bitmask, the mixers that
an audio source applies to.  For example, 0xF means that the source
applies to all four mixers.
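
A rough usage sketch, assuming obs_source_set_audio_mixers takes the
source and a mixer bitmask (the source variables and helper are
placeholders):

#include <obs.h>

static void assign_mixers(obs_source_t *mic, obs_source_t *desktop)
{
        /* Send the microphone to mixers 1 and 2 only (bits 0 and 1). */
        obs_source_set_audio_mixers(mic, (1 << 0) | (1 << 1));

        /* Send desktop audio to all four mixers. */
        obs_source_set_audio_mixers(desktop, 0xF);
}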

Audio Encoders:

Audio encoders now must specify which specific audio mixer they use when
they encode audio data.

Outputs:

Outputs that use encoders can now support multiple audio tracks at once
if they have the OBS_OUTPUT_MULTI_TRACK capability flag set.  This is
mostly useful for certain types of RTMP transmissions, though it may
also be useful later on for file formats that support multiple audio
tracks.
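
A hypothetical setup sketch using the signatures above; the encoder id,
names, and NULL settings are placeholders, and releasing the encoders is
omitted for brevity:

#include <obs.h>

static void setup_two_audio_tracks(obs_output_t *output)
{
        /* One encoder per mixer: mixer 0 feeds track 0, mixer 1 feeds
         * track 1 on a multi-track capable output. */
        obs_encoder_t *track1 = obs_audio_encoder_create(
                        "my_aac_encoder", "stream audio", NULL, 0);
        obs_encoder_t *track2 = obs_audio_encoder_create(
                        "my_aac_encoder", "recording audio", NULL, 1);

        obs_output_set_audio_encoder(output, track1, 0);
        obs_output_set_audio_encoder(output, track2, 1);
}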
2015-02-04 16:51:29 -08:00
jp9000
24eaf77963 libobs: Fix typo, 'audio' instead of 'video'
For some extremely inexplicable reason, I somehow managed to use 'video'
for the audio data instead of 'audio'.
2015-02-04 15:40:21 -08:00
jp9000
f93b2fe794 Set various thread names
Helps identify which threads are which when debugging.
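
For reference, one way to do this with glibc (macOS's pthread_setname_np
takes only the name and applies to the calling thread):

#define _GNU_SOURCE
#include <pthread.h>

static void *video_thread(void *param)
{
        (void)param;

        /* Shows up in gdb, 'top -H', /proc/<pid>/task/<tid>/comm, etc.
         * glibc limits the name to 15 characters plus the terminator. */
        pthread_setname_np(pthread_self(), "obs_video");

        /* ... thread work ... */
        return NULL;
}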
2015-01-03 02:37:20 -08:00
jp9000
11106c2fce libobs: Redesign/optimize frame encoding handling
Previously, the design for the interaction between the encoder thread
and the graphics thread was that the encoder thread would signal to the
graphics thread when to start drawing each frame.  The original idea
behind this was to prevent mutually cascading stalls of encoding or
graphics rendering (i.e., if rendering took too long, then encoding
would have to catch up, then rendering would have to catch up again, and
so on, cascading upon each other).  The ultimate goal was to prevent
encoding from impacting graphics and vice versa.

However, eventually it was realized that there were some fundamental
flaws with this design.

1. Stray frame duplication.  You could not guarantee that a frame would
   render on time, so sometimes frames would unintentionally be lost if
   there was any sort of minor hiccup or, presumably, if the thread took
   too long to be scheduled.

2. Frame timing in the rendering thread was less accurate.  The only
   place where frame timing was accurate was in the encoder thread, and
   the graphics thread was at the whim of thread scheduling.  On higher
   end computers it was typically fine, but it was just generally not
   guaranteed that a frame would be rendered when it was supposed to be
   rendered.

So the solution (originally proposed by r1ch and paibox) is to instead
keep the encoding and graphics threads separate as usual, but instead of
the encoder thread controlling the graphics thread, the graphics thread
now controls the encoder thread.  The encoder thread keeps a limited
cache of frames, then the graphics thread copies frames in to the cache
and increments a semaphore to schedule the encoder thread to encode that
data.

In the cache, each frame has an encode counter.  If the frame cache is
full (e.g., the encoder is taking too long to return frames), it will
not cache a new frame, but will instead just increment the counter on
the last frame in the cache to schedule that frame to encode again,
ensuring that frames are on time and reducing CPU usage by lowering
video complexity.  If the graphics thread takes too long to render a
frame, then it will add that frame with the count value set to the total
number of frames that were missed (actual legitimately duplicated
frames).
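
A condensed sketch of that handoff, using hypothetical names and
simplified bookkeeping (the real code lives in libobs' video-io.c and
differs in detail; frame data is stored by pointer here rather than
copied into cache-owned buffers):

#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>

#define CACHE_SIZE 4

extern void encode_one(uint8_t *data, uint64_t timestamp); /* stand-in encoder call */

struct cached_frame {
        uint8_t *data;
        uint64_t timestamp;
        int count; /* how many times this frame should be encoded */
};

struct frame_cache {
        struct cached_frame frames[CACHE_SIZE];
        int first, num;
        pthread_mutex_t mutex;
        sem_t sem;
};

/* Graphics thread: called once per frame interval; 'count' is 1 plus the
 * number of frames the renderer legitimately missed. */
static void cache_frame(struct frame_cache *c, uint8_t *data, uint64_t ts,
                int count)
{
        pthread_mutex_lock(&c->mutex);

        if (c->num == CACHE_SIZE) {
                /* Encoder is behind: schedule the newest cached frame to
                 * be encoded again instead of caching a new one. */
                int last = (c->first + c->num - 1) % CACHE_SIZE;
                c->frames[last].count += count;
        } else {
                int idx = (c->first + c->num++) % CACHE_SIZE;
                c->frames[idx].data = data;
                c->frames[idx].timestamp = ts;
                c->frames[idx].count = count;
        }

        pthread_mutex_unlock(&c->mutex);

        while (count--)
                sem_post(&c->sem); /* one post per encode to perform */
}

/* Encoder thread: each semaphore post results in exactly one encode of
 * the oldest cached frame (re-encoding it while its count is above one). */
static void *encode_loop(void *param)
{
        struct frame_cache *c = param;

        for (;;) {
                sem_wait(&c->sem);
                pthread_mutex_lock(&c->mutex);

                struct cached_frame *f = &c->frames[c->first];
                encode_one(f->data, f->timestamp);

                if (--f->count == 0) {
                        c->first = (c->first + 1) % CACHE_SIZE;
                        c->num--;
                }

                pthread_mutex_unlock(&c->mutex);
        }

        return NULL;
}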

Because the cache gives many frames of breathing room for the encoder to
encode frames, this design helps improve results especially when using
encoding presets that have higher complexity and CPU usage, minimizing
the risk of needlessly skipped or duplicated frames.

I also managed to sneak in what should be a bit of an optimization to
reduce copying of frame data, though how much of an optimization it
ultimately ends up being is debatable.

So to sum it up, this commit increases accuracy of frame timing,
completely removes stray frame duplication, gives better results for
higher complexity encoding presets, and potentially optimizes the frame
pipeline a tiny bit.
2014-12-31 04:03:47 -08:00
jp9000
8e1549820b libobs/media-io: Add frame copying function 2014-12-31 04:03:45 -08:00
jp9000
5d0551eb27 libobs/media-io: Add #pragma once to video-frame.h 2014-12-31 04:03:44 -08:00
jp9000
c52406c178 libobs/media-io: Fix recursive lock in video
In certain cases the video thread could end up trying to lock itself
recursively.  This just allows the mutexes to safely be locked
recursively.
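
For reference, a recursive pthread mutex is created like this (variable
names are illustrative):

#include <pthread.h>

static pthread_mutex_t video_mutex;

static void init_video_mutex(void)
{
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&video_mutex, &attr);
        pthread_mutexattr_destroy(&attr);

        /* The same thread may now lock video_mutex again without
         * deadlocking, as long as each lock is paired with an unlock. */
}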
2014-12-21 10:14:19 -08:00
jp9000
59c4731aa6 libobs/media-io: Add colorspace to video info
This is useful for keeping track of what the current colorspace/range is
if a YUV format is being used.
2014-12-11 19:47:51 -08:00
jp9000
a07ebf44e6 media-io: Fix repeating video timestamps
In video-io.c, video frames could be skipped, but the skipped frame's
timestamp would then repeat for the next frame, giving the next frame a
non-monotonic timestamp, and then jump.  This could slightly mess up
syncing when the frame is finally given to an output.
2014-10-22 20:32:49 -07:00
jp9000
f6d635e586 Change default audio smoothing threshold to 50ms
70 milliseconds is a bit too high for the default audio timestamp
smoothing threshold.  The full range of error thus becomes 140
milliseconds, which is a bit more than necessary to worry about.  For
the time being, I feel it may be worth it to try 50 milliseconds.
2014-10-22 20:32:46 -07:00
Palana
4247a7b81e Add media remuxer to media-io 2014-10-12 06:27:33 +02:00
jp9000
e78c54e8e5 libobs/media-io: Add more audio debug output 2014-10-06 18:49:53 -04:00
jp9000
41fad2d1a4 (API Change) Use const params where applicable
This fixes a minor flaw in the API where data always had to be mutable
to be usable by the API.

Functions that do not modify the fundamental underlying data of a
structure should be marked as constant, both for safety and to signify
that the parameter is input only and will not be modified by the
function using it.
2014-09-26 17:23:07 -07:00
GoaLitiuM
10f5d7f3aa Fixed NULL pointer dereferencing in linked lists 2014-09-27 01:35:36 +03:00
jp9000
c9df41c1e2 (API Change) Remove pointers from all typedefs
Typedef pointers are unsafe.  If you do:
typedef struct bla *bla_t;
then you cannot use it as a constant, such as: const bla_t, because
that constant will apply to the pointer itself rather than to the
underlying data.  I admit this was a fundamental mistake that must
be corrected.

All typedefs that were pointer types will now have their pointers
removed from the type itself, and the pointers will be used when they
are actually used as variables/parameters/returns instead.

This does not break ABI though, which is pretty nice.
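
A small illustration of the difference, with placeholder names:

struct bla { int value; };

typedef struct bla *bla_ptr_t; /* old style: pointer baked into the typedef */
typedef struct bla bla_t;      /* new style: plain struct typedef */

/* 'const' binds to the pointer, not the data: this is equivalent to
 * 'struct bla *const b', so b->value may still be modified. */
void old_api(const bla_ptr_t b);

/* 'const struct bla *b': the pointed-to data cannot be modified. */
void new_api(const bla_t *b);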
2014-09-25 21:48:11 -07:00
jp9000
dc43438057 Prevent audio too far from expected timing
Audio that goes below the minimum expected timing (current time -
buffering time) is automatically removed.  However, delayed audio is
not removed regardless of its delay.  This puts a hard cap of 6 seconds
past the current time on the maximum delay that audio can have.  This
will also prevent the circular buffer from dynamically growing too
large.
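
A minimal sketch of that bounds check, with illustrative names and the
6-second cap expressed in nanoseconds (not the actual audio-io.c code):

#include <stdbool.h>
#include <stdint.h>

#define MAX_AUDIO_DELAY_NS 6000000000ULL /* 6 seconds */

/* Returns false if the incoming audio data should be discarded. */
static bool audio_time_in_bounds(uint64_t current_time,
                uint64_t buffering_time, uint64_t audio_time)
{
        if (audio_time < current_time - buffering_time)
                return false; /* too far behind: below minimum timing */
        if (audio_time > current_time + MAX_AUDIO_DELAY_NS)
                return false; /* too far ahead: delayed past the hard cap */
        return true;
}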
2014-09-04 14:01:40 -07:00
jp9000
5ac364ff9a Perform timestamp smoothing in media-io as well
Doing timestamp smoothing in obs-source.c is good because timestamps
can typically operate on a different timebase.  However, obs-source.c
can also change that timebase dynamically (such as with async video and
unexpected timestamp jumps), so to ensure that audio is seamless in the
output as well, perform timestamp smoothing in audio-io.c too, as an
extra precautionary measure.
2014-08-30 11:52:08 -07:00
jp9000
355d5dab6e Move timestamp smooth threshold to media-io 2014-08-30 11:51:37 -07:00
jp9000
f36f82b036 video-io: Add function to just get video format 2014-08-10 16:39:36 -07:00
jp9000
72fbdc7eaa Fix bug with automatic video scaling
This is sort of hard to explain:  the scale_video_output function was
overwriting the current frame.  If scaling was disabled, it would do
nothing, and return success, and all would be well.  If it was enabled,
it would then call the scaler, and then replace the contents of the
'data' function parameter with the scaled frame data.  The problem with
this is that I was passing video_output::cur_frame directly, which
overwrites its previous value with the scaled frame data.  Then if
cur_frame was not updated on time, it would end up trying to scale the
previously scaled image, if that makes sense.  It would call the video
scaler with the same frame for both the source and destination.

So the simple fix was to just use a local variable and pass that in as
a parameter to prevent this bug from occurring.
2014-08-10 12:49:44 -07:00
jp9000
42a0925ce1 (API Change) media-io: Improve naming consistency
Renamed:                        To:
-----------------------------------------------------------
audio_output_blocksize          audio_output_get_block_size
audio_output_planes             audio_output_get_planes
audio_output_channels           audio_output_get_channels
audio_output_samplerate         audio_output_get_sample_rate
audio_output_getinfo            audio_output_get_info
audio_output_createline         audio_output_create_line
video_output_getinfo            video_output_get_info
video_gettime                   video_output_get_time
video_getframetime              video_output_get_frame_time
video_output_width              video_output_get_width
video_output_height             video_output_get_height
video_output_framerate          video_output_get_frame_rate
video_output_num_skipped_frames video_output_get_skipped_frames
video_output_total_frames       video_output_get_total_frames
2014-08-09 11:57:37 -07:00
jp9000
b203f36130 Fix automatic scaling bug
The bug here is that when conversion is active, the source video frame
is initialized with the destination height/width/format instead of the
source height/width/format.
2014-08-01 09:37:44 -07:00
jp9000
289137d5f9 media-io: Add function for total video frames 2014-07-27 13:26:51 -07:00
Palana
bd2b4c91b6 libobs/media-io: Handle VIDEO_CS_DEFAULT in video_format_get_parameters 2014-07-16 22:39:00 +02:00
jp9000
c7bb73fe07 Implement high encoder CPU usage handling
This implements the 'frame skipping' mechanism to forcibly cause frames
to be duplicated in order to reduce encoder complexity so the encoder
can catch up to the video; otherwise it will fall progressively behind
and cause a desync of the video.

Typically, if a user gets this issue, they should turn down their
settings.  For the love of god do not tell them that 'frames are
skipping', just tell them that CPU usage is high, and that they should
consider turning down their settings.
2014-07-01 11:01:22 -07:00
Danni
a91174e2b2 Slight modification of mixing function 2014-06-08 11:48:38 -05:00
jp9000
42a0411f1c libobs: Fix switch warning 2014-06-07 06:04:13 -07:00
jp9000
4ccf928ea1 libobs/media-io: Remove obsolete mixing functions
Also, remove the volume level processing from audio-io.c; it was moved
to obs_source instead.
2014-06-03 04:50:15 -07:00
Danni
9c3fb4b8dc Added simple volume meter. Updated per comments Pull Req #90 2014-05-24 16:42:54 -07:00
Danni
90d9a5204f Updated per comments pull #90. 2014-05-24 16:24:48 -07:00
Danni
bc542a3e75 Added simple volume meter for reference of input levels. 2014-05-20 09:26:18 -05:00
BtbN
e958bc16e4 Move static to front of declaration, as required by c99 2014-05-10 03:40:05 +02:00
BtbN
9d88c10ca4 Fix multi-char constant warnings 2014-05-10 02:06:59 +02:00
fryshorts
e8bf01acfc Fix destructor of video outputs.
This fixes a crash on Linux that supposedly occurs when pthread_join
is called multiple times.
2014-04-26 01:39:43 +02:00
jp9000
633383bdaf libobs/media-io/video-matrices.c: Fix VC warnings 2014-04-24 21:10:39 -07:00
Palana
9f43e6c257 Add yuv_to_rgb matrices and video_format_get_parameters utility 2014-04-24 16:47:06 +02:00
Palana
b77b9feb61 Add helper to convert from FOURCC to VIDEO_FORMAT 2014-04-24 16:46:54 +02:00
Palana
3990c18aac Add NULL checks and assertions to fix clang static analysis problems
Also remove an unused variable from obs-encoder.c (via clang static
analysis)
2014-04-14 23:02:53 +02:00
jp9000
92522d1886 Implement RTMP module (still needs drop code)
- Implement the RTMP output module.  This time around, we just use a
   simple FLV muxer, then write to the stream with RTMP_Write.
   Easy and effective.

 - Fix the FLV muxer, the muxer now outputs proper FLV packets.

 - Output API:
   * When using encoders, automatically interleave encoded packets
     before sending it to the output.

   * Pair encoders and have them automatically wait for the other to
     start to ensure sync.

   * Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
     because it was a bit confusing, and doing this makes a lot more
     sense for outputs that need to stop suddenly (disconnections/etc).

 - Encoder API:
   * Remove some unnecessary encoder functions from the actual API and
     make them internal.  Most of the encoder functions are handled
     automatically by outputs anyway, so there's no real need to expose
     them and end up inadvertently confusing plugin writers.

   * Have audio encoders wait for the video encoder to get a frame, then
     start at the exact data point that the first video frame starts to
     ensure the most accurate sync of video/audio possible.

   * Add a required 'frame_size' callback for audio encoders that
     returns the expected number of frames desired to encode with.  This
     way, the libobs encoder API can handle the circular buffering
     internally automatically for the encoder modules, so encoder
     writers don't have to do it themselves.

 - Fix a few bugs in the serializer interface.  It was passing the wrong
   variable for the data in a few cases.

 - If a source has video, make obs_source_update defer the actual update
   callback until the tick function is called to prevent threading
   issues.
2014-04-07 22:00:10 -07:00
jp9000
0cf9e0cfdd Add preliminary FLV/RTMP output (incomplete)
- obs-outputs module:  Add preliminary code to send out data, and add
   an FLV muxer.  This time we don't really need to build the packets
   ourselves, we can just use the FLV muxer and send it directly to
   RTMP_Write and it should automatically parse the entire stream for us
   without us having to write much manual code at all.  We'll see how it
   goes.

 - libobs:  Add AVC NAL packet parsing code

 - libobs/media-io:  Add quick helper functions for audio/video to get
   the width/height/fps/samplerate/etc rather than having to query the
   info structures each time.

 - libobs (obs-output.c):  Change 'connect' signal to 'start' and 'stop'
   signals.  'start' now specifies an error code rather than whether it
   simply failed, that way the client can actually know *why* a failure
   occurred.  Added those error codes to obs-defs.h.

 - libobs:  Add a few functions to duplicate/free encoder packets
2014-04-01 11:55:18 -07:00
jp9000
fd37d9e9a8 Implement encoder interface (still preliminary)
- Implement OBS encoder interface.  It was previously incomplete, but
   now is reaching some level of completion, though probably should
   still be considered preliminary.

   I had originally implemented it so that encoders only have a 'reset'
   function to reset their parameters, but I felt that having both a
   'start' and 'stop' function would be useful.

   Encoders are now assigned to a specific video/audio media output each
   rather than implicitly assigned to the main obs video/audio
   contexts.  This allows separate encoder contexts that aren't
   necessarily assigned to the main video/audio context (which is useful
   for things such as recording specific sources).  Will probably have
   to do this for regular obs outputs as well.

   When creating an encoder, you must now explicitly state whether that
   encoder is an audio or video encoder.

   Audio and video can optionally be automatically converted depending
   on what the encoder specifies.

   When something 'attaches' to an encoder, the first attachment starts
   the encoder, and the encoder automatically attaches to the media
   output context associated with it.  Subsequent attachments won't have
   the same effect; they will just start receiving the same encoder data
   when the next keyframe plays (along with SEI if any).  When detaching
   from the encoder, the last detachment will fully stop the encoder and
   detach the encoder from the media output context associated with the
   encoder.

   SEI must actually be exported separately; because new encoder
   attachments may not always be at the beginning of the stream, the
   first keyframe they get must have that SEI data in it.  If the
   encoder has SEI data, it needs only add one small function to simply
   query that SEI data, and then that data will be handled automatically
   by libobs for all subsequent encoder attachments.

 - Implement x264 encoder plugin, move x264 files to separate plugin to
   separate necessary dependencies.

 - Change video/audio frame output structures to not use const
   qualifiers to prevent issues with non-const function usage elsewhere.
   This was an issue when writing the x264 encoder, as the x264 encoder
   expects non-const frame data.

   Change stagesurf_map to return a non-const data type to prevent this
   as well.

 - Change full range parameter of video scaler to be an enum rather than
   boolean
2014-03-16 16:21:34 -07:00
jp9000
585fd8f969 Fix audio streaming and mac semaphores
...The reason audio didn't work was that I overwrote the bitrate
values.

As for semaphores, Mac doesn't support unnamed semaphores without using
Mach semaphores.  So, I just implemented a semaphore wrapper for each
OS.
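
A hedged sketch of such a wrapper, using GCD dispatch semaphores as the
macOS backend (names are illustrative, not libobs' actual threading API):

#ifdef __APPLE__
#include <dispatch/dispatch.h>

typedef dispatch_semaphore_t sem_wrap;

static inline int sem_wrap_init(sem_wrap *sem, long value)
{
        *sem = dispatch_semaphore_create(value);
        return *sem ? 0 : -1;
}
static inline void sem_wrap_post(sem_wrap *sem)
{
        dispatch_semaphore_signal(*sem);
}
static inline void sem_wrap_wait(sem_wrap *sem)
{
        dispatch_semaphore_wait(*sem, DISPATCH_TIME_FOREVER);
}
#else
#include <semaphore.h>

typedef sem_t sem_wrap;

static inline int sem_wrap_init(sem_wrap *sem, long value)
{
        return sem_init(sem, 0, (unsigned int)value);
}
static inline void sem_wrap_post(sem_wrap *sem)
{
        sem_post(sem);
}
static inline void sem_wrap_wait(sem_wrap *sem)
{
        sem_wait(sem);
}
#endif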
2014-03-10 19:04:00 -07:00
jp9000
b2202c4843 UI: Swap audio slots
Had the audio restart slot connected to things that didn't require a
restart.
2014-03-07 22:34:49 -07:00
jp9000
a9f5959b3c Fix an error and a few warnings
The strings didn't have ending double quotes.  No clue why this didn't
fail in GCC and VC.  Well, VC is horrible but I expected better out of
GCC.
2014-03-07 17:19:26 -07:00