obs-studio/libobs/media-io/video-io.h

/******************************************************************************
    Copyright (C) 2013 by Hugh Bailey <obs.jim@gmail.com>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/
#pragma once

#include "media-io-defs.h"
#ifdef __cplusplus
extern "C" {
#endif
struct video_frame;

/* Base video output component. Use this to create a video output track. */
struct video_output;
typedef struct video_output video_t;
enum video_format {
	VIDEO_FORMAT_NONE,

	/* planar 4:2:0 format */
	VIDEO_FORMAT_I420, /* three-plane */
	VIDEO_FORMAT_NV12, /* two-plane, luma and packed chroma */

	/* packed 4:2:2 formats */
	VIDEO_FORMAT_YVYU,
	VIDEO_FORMAT_YUY2, /* YUYV */
	VIDEO_FORMAT_UYVY,

	/* packed uncompressed formats */
	VIDEO_FORMAT_RGBA,
	VIDEO_FORMAT_BGRA,
	VIDEO_FORMAT_BGRX,
	VIDEO_FORMAT_Y800, /* grayscale */

	/* planar 4:4:4 */
	VIDEO_FORMAT_I444,

	/* more packed uncompressed formats */
	VIDEO_FORMAT_BGR3,

	/* planar 4:2:2 */
	VIDEO_FORMAT_I422,

	/* planar 4:2:0 with alpha */
	VIDEO_FORMAT_I40A,

	/* planar 4:2:2 with alpha */
	VIDEO_FORMAT_I42A,

	/* planar 4:4:4 with alpha */
	VIDEO_FORMAT_YUVA,

	/* packed 4:4:4 with alpha */
	VIDEO_FORMAT_AYUV,
};
enum video_colorspace {
	VIDEO_CS_DEFAULT,
	VIDEO_CS_601,
	VIDEO_CS_709,
	VIDEO_CS_SRGB,
};

enum video_range_type {
	VIDEO_RANGE_DEFAULT,
	VIDEO_RANGE_PARTIAL,
	VIDEO_RANGE_FULL
};
struct video_data {
	uint8_t *data[MAX_AV_PLANES];
	uint32_t linesize[MAX_AV_PLANES];
	uint64_t timestamp;
};
struct video_output_info {
	const char *name;

	enum video_format format;
	uint32_t fps_num;
	uint32_t fps_den;
	uint32_t width;
	uint32_t height;
	size_t cache_size;

	enum video_colorspace colorspace;
	enum video_range_type range;
};
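
/* Illustrative sketch (not part of the original header): filling out a
 * video_output_info for a 1920x1080, 30 fps NV12 output. The name string and
 * the 16-frame cache size are arbitrary example values:
 *
 *     struct video_output_info info = {
 *             .name       = "example_video",
 *             .format     = VIDEO_FORMAT_NV12,
 *             .fps_num    = 30,
 *             .fps_den    = 1,
 *             .width      = 1920,
 *             .height     = 1080,
 *             .cache_size = 16,
 *             .colorspace = VIDEO_CS_709,
 *             .range      = VIDEO_RANGE_PARTIAL,
 *     };
 */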
static inline bool format_is_yuv(enum video_format format)
{
	switch (format) {
	case VIDEO_FORMAT_I420:
	case VIDEO_FORMAT_NV12:
	case VIDEO_FORMAT_I422:
	case VIDEO_FORMAT_YVYU:
	case VIDEO_FORMAT_YUY2:
	case VIDEO_FORMAT_UYVY:
	case VIDEO_FORMAT_I444:
	case VIDEO_FORMAT_I40A:
	case VIDEO_FORMAT_I42A:
	case VIDEO_FORMAT_YUVA:
	case VIDEO_FORMAT_AYUV:
		return true;
	case VIDEO_FORMAT_NONE:
	case VIDEO_FORMAT_RGBA:
	case VIDEO_FORMAT_BGRA:
	case VIDEO_FORMAT_BGRX:
	case VIDEO_FORMAT_Y800:
	case VIDEO_FORMAT_BGR3:
		return false;
	}

	return false;
}
static inline const char *get_video_format_name(enum video_format format)
{
	switch (format) {
	case VIDEO_FORMAT_I420:
		return "I420";
	case VIDEO_FORMAT_NV12:
		return "NV12";
	case VIDEO_FORMAT_I422:
		return "I422";
	case VIDEO_FORMAT_YVYU:
		return "YVYU";
	case VIDEO_FORMAT_YUY2:
		return "YUY2";
	case VIDEO_FORMAT_UYVY:
		return "UYVY";
	case VIDEO_FORMAT_RGBA:
		return "RGBA";
	case VIDEO_FORMAT_BGRA:
		return "BGRA";
	case VIDEO_FORMAT_BGRX:
		return "BGRX";
	case VIDEO_FORMAT_I444:
		return "I444";
	case VIDEO_FORMAT_Y800:
		return "Y800";
	case VIDEO_FORMAT_BGR3:
		return "BGR3";
	case VIDEO_FORMAT_I40A:
		return "I40A";
	case VIDEO_FORMAT_I42A:
		return "I42A";
	case VIDEO_FORMAT_YUVA:
		return "YUVA";
	case VIDEO_FORMAT_AYUV:
		return "AYUV";
	case VIDEO_FORMAT_NONE:;
	}

	return "None";
}
static inline const char *get_video_colorspace_name(enum video_colorspace cs)
{
	switch (cs) {
	case VIDEO_CS_DEFAULT:
	case VIDEO_CS_709:
		return "709";
	case VIDEO_CS_SRGB:
		return "sRGB";
	case VIDEO_CS_601:;
	}

	return "601";
}
static inline enum video_range_type
resolve_video_range(enum video_format format, enum video_range_type range)
{
	if (range == VIDEO_RANGE_DEFAULT) {
		range = format_is_yuv(format) ? VIDEO_RANGE_PARTIAL
					      : VIDEO_RANGE_FULL;
	}

	return range;
}
static inline const char *get_video_range_name(enum video_format format,
					       enum video_range_type range)
{
	range = resolve_video_range(format, range);
	return range == VIDEO_RANGE_FULL ? "Full" : "Partial";
}
enum video_scale_type {
	VIDEO_SCALE_DEFAULT,
	VIDEO_SCALE_POINT,
	VIDEO_SCALE_FAST_BILINEAR,
	VIDEO_SCALE_BILINEAR,
	VIDEO_SCALE_BICUBIC,
};
struct video_scale_info {
	enum video_format format;

	uint32_t width;
	uint32_t height;

	enum video_range_type range;
	enum video_colorspace colorspace;
};
EXPORT enum video_format video_format_from_fourcc(uint32_t fourcc);
EXPORT bool video_format_get_parameters(enum video_colorspace color_space,
					enum video_range_type range,
					float matrix[16], float min_range[3],
					float max_range[3]);
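
/* Illustrative sketch (not part of the original header): querying the color
 * conversion parameters for BT.709 partial range. On success the function
 * fills a 4x4 conversion matrix and per-channel min/max range values; the
 * exact contents and layout are defined by the implementation, so only the
 * call pattern is shown:
 *
 *     float matrix[16], min_range[3], max_range[3];
 *     if (video_format_get_parameters(VIDEO_CS_709, VIDEO_RANGE_PARTIAL,
 *                                     matrix, min_range, max_range)) {
 *             // use matrix/min_range/max_range for YUV conversion
 *     }
 */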
#define VIDEO_OUTPUT_SUCCESS 0
#define VIDEO_OUTPUT_INVALIDPARAM -1
#define VIDEO_OUTPUT_FAIL -2
EXPORT int video_output_open(video_t **video, struct video_output_info *info);
EXPORT void video_output_close(video_t *video);
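
/* Illustrative sketch (not part of the original header): opening and closing
 * an output using a video_output_info like the one sketched above. Checking
 * against VIDEO_OUTPUT_SUCCESS covers both failure codes:
 *
 *     video_t *video = NULL;
 *     if (video_output_open(&video, &info) == VIDEO_OUTPUT_SUCCESS) {
 *             // ... connect consumers, produce frames ...
 *             video_output_close(video);
 *     }
 */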
EXPORT bool
video_output_connect(video_t *video, const struct video_scale_info *conversion,
		     void (*callback)(void *param, struct video_data *frame),
		     void *param);
EXPORT void video_output_disconnect(video_t *video,
				    void (*callback)(void *param,
						     struct video_data *frame),
				    void *param);
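
/* Illustrative sketch (not part of the original header): receiving frames
 * through a callback. Passing NULL for 'conversion' is assumed here to
 * request the output's native format with no scaling; the callback receives
 * the plane pointers, line sizes, and timestamp via struct video_data:
 *
 *     static void on_frame(void *param, struct video_data *frame)
 *     {
 *             // consume frame->data[] / frame->linesize[] at frame->timestamp
 *     }
 *
 *     video_output_connect(video, NULL, on_frame, NULL);
 *     // ...
 *     video_output_disconnect(video, on_frame, NULL);
 */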
EXPORT bool video_output_active(const video_t *video);
EXPORT const struct video_output_info *
video_output_get_info(const video_t *video);
EXPORT bool video_output_lock_frame(video_t *video, struct video_frame *frame,
				    int count, uint64_t timestamp);
EXPORT void video_output_unlock_frame(video_t *video);
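
/* Illustrative sketch (not part of the original header): the producer-side
 * lock/unlock pair around the frame cache. 'count' is assumed to be the
 * number of times the frame should be encoded (1 for a normal frame, more to
 * account for missed frames); struct video_frame itself is defined in
 * video-frame.h:
 *
 *     struct video_frame frame;
 *     if (video_output_lock_frame(video, &frame, 1, timestamp)) {
 *             // copy pixel data into the cached frame's planes here
 *             video_output_unlock_frame(video);
 *     }
 */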
EXPORT uint64_t video_output_get_frame_time(const video_t *video);
EXPORT void video_output_stop(video_t *video);
EXPORT bool video_output_stopped(video_t *video);
EXPORT enum video_format video_output_get_format(const video_t *video);
EXPORT uint32_t video_output_get_width(const video_t *video);
EXPORT uint32_t video_output_get_height(const video_t *video);
EXPORT double video_output_get_frame_rate(const video_t *video);
EXPORT uint32_t video_output_get_skipped_frames(const video_t *video);
EXPORT uint32_t video_output_get_total_frames(const video_t *video);
extern void video_output_inc_texture_encoders(video_t *video);
extern void video_output_dec_texture_encoders(video_t *video);
extern void video_output_inc_texture_frames(video_t *video);
extern void video_output_inc_texture_skipped_frames(video_t *video);
#ifdef __cplusplus
}
#endif