mirror of https://github.com/obsproject/obs-studio.git synced 2024-09-20 13:08:50 +02:00
Commit Graph

92 Commits

Author SHA1 Message Date
jp9000
4f2a731acf Implement reconnecting
The core itself now provides reconnection options (enabled by default, 2
second timeout between reconnects, 20 retries max until actual
disconnection occurs).  This will make things easier for both module
developers and UI developers.

Reconnecting treats the stream as though it were still active, and
signals are sent when reconnecting and upon successful reconnection.

User interface feedback for reconnections still needs to be implemented.
2014-07-02 20:39:39 -07:00
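
A minimal sketch of how a frontend might consume this, assuming the
modern names obs_output_set_reconnect_settings and the
"reconnect"/"reconnect_success" signals (exact spellings at this commit
may differ):

    #include <obs.h>

    static void on_reconnect(void *data, calldata_t *cd)
    {
        /* update UI: "Reconnecting..." */
    }

    static void on_reconnect_success(void *data, calldata_t *cd)
    {
        /* update UI: "Reconnected." */
    }

    static void setup_reconnect(obs_output_t *output)
    {
        /* override the defaults described above (20 retries, 2s apart) */
        obs_output_set_reconnect_settings(output, 20, 2);

        signal_handler_t *sh = obs_output_get_signal_handler(output);
        signal_handler_connect(sh, "reconnect", on_reconnect, NULL);
        signal_handler_connect(sh, "reconnect_success",
                               on_reconnect_success, NULL);
    }
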
jp9000
1abf91577e Add module callbacks for loading locale data
The module callback obs_module_set_locale will be called after loading
the module, and any time the locale is manually changed via core API.

When this function is called, the module is expected to load new text
lookup values for all the text it uses based upon the current locale.
2014-06-25 12:37:06 -07:00
jp9000
899f053034 Add API for setting/getting current locale
This API is used to set the current locale for libobs, which is then
passed to all modules when a module is loaded or whenever the locale is
manually changed.
2014-06-25 12:36:17 -07:00
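
On the module side, this pairs with the obs_module_set_locale callback
from the commit above. A rough sketch; the lookup-file layout here is
hypothetical, but text_lookup_create/text_lookup_destroy come from
libobs's util/text-lookup.h:

    #include <obs-module.h>
    #include <util/text-lookup.h>
    #include <stdio.h>

    static lookup_t *lookup;

    void obs_module_set_locale(const char *locale)
    {
        char path[512];

        if (lookup)
            text_lookup_destroy(lookup);

        /* hypothetical layout: one text lookup file per locale */
        snprintf(path, sizeof(path), "my-plugin/locale/%s.ini", locale);
        lookup = text_lookup_create(path);
    }

The frontend then only has to call obs_set_locale("de-DE") (for
example) and every loaded module is notified.
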
jp9000
9b23914c37 libobs: Add 'initialize' callback to services
The 'initialize' callback is called before the encoders/output start up
so that the service can adjust encoder settings to required values if
needed.

Also added the function 'obs_encoder_active', which returns whether the
encoder is currently active.
2014-06-16 21:37:53 -07:00
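
A hedged sketch of what such an 'initialize' callback might look like;
the callback signature and the obs_output_get_video_encoder accessor
follow the modern headers, not necessarily this exact commit:

    #include <obs.h>

    static bool my_service_initialize(void *data, obs_output_t *output)
    {
        obs_encoder_t *venc = obs_output_get_video_encoder(output);

        /* settings can only be forced before the encoder starts */
        if (obs_encoder_active(venc))
            return false;

        obs_data_t *settings = obs_data_create();
        obs_data_set_int(settings, "bitrate", 2500); /* service max */
        obs_encoder_update(venc, settings);
        obs_data_release(settings);
        return true;
    }
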
jp9000
65f84cf356 Add solid_effect variable to obs_core_video 2014-06-14 23:59:39 -07:00
Jim
4eb6267372 Merge pull request #90 from antihax/master
Added a simple volume meter for monitoring input levels.
2014-06-03 05:15:18 -07:00
jp9000
5cd64ce7f2 libobs: Add level/magnitude/peak volume levels
This replaces the older code which simply queried the max volume level
value for any given audio.

I'm still not 100% sure if this is how I want to approach the
problem, particularly, whether this should be done in obs_source or in
audio_line, but it can always be moved later if needed.

This uses the calculations by the awesome Bill Hamilton that OBS1 used
for its volume levels.  It calculates the current max (level),
magnitude, and current peak.  This data then can be used to create
awesome volume meter controls later on.

NOTE: Will probably need optimization; it processes one float at a time
right now.

Also, change some of the naming conventions.  I actually need to change
a lot of the naming conventions in general so that all words are
separated by underscores.  Kind of a bad practice there on my part.
2014-06-03 04:48:20 -07:00
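
Roughly the kind of per-block calculation being described, as a sketch
(the actual OBS1-derived math also applies decay over time to the
peak):

    #include <math.h>
    #include <stddef.h>

    static void calc_levels(const float *samples, size_t count,
                            float *level, float *magnitude)
    {
        float max_val = 0.0f, sum = 0.0f;

        /* one float at a time, as noted above */
        for (size_t i = 0; i < count; i++) {
            float a = fabsf(samples[i]);
            if (a > max_val)
                max_val = a;
            sum += samples[i] * samples[i];
        }

        *level = max_val;                       /* current max */
        *magnitude = sqrtf(sum / (float)count); /* RMS */
    }
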
jp9000
9595a78e8a libobs: Add planar 4:2:0 async texture support
This uses the reverse planar YUV 4:2:0 conversion shader to output a YUV
texture without having to convert it via CPU.  Again, this will reduce
video upload bandwidth usage to 37.5% of the original rate.  I suspect
this will be particularly useful for when an FFmpeg or libav input
plugin for playing videos is made.

NOTE: There's an issue with certain texture sizes right now that I
haven't been able to identify: if the full size of the texture data
divided by the base texture width is an uneven number, the V chroma
plane can potentially shift.  I've only had this happen with a C920 at
160x90 resolution, and almost all resolutions tend to be even.  Needs
further testing with more devices that support planar YUV 4:2:0 output.
2014-05-30 02:23:36 -07:00
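
The 37.5% figure follows directly from the plane sizes: planar 4:2:0
stores 1.5 bytes per pixel versus 4 for RGBA. A quick check:

    #include <stdio.h>

    int main(void)
    {
        unsigned w = 1280, h = 720;
        size_t rgba = (size_t)w * h * 4;               /* 4 bytes/pixel  */
        size_t i420 = (size_t)w * h +                  /* Y plane        */
                      2 * ((size_t)(w / 2) * (h / 2)); /* U and V planes */

        printf("upload ratio: %.1f%%\n", 100.0 * i420 / rgba); /* 37.5% */
        return 0;
    }
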
jp9000
0b5ba66b33 async vid.: Fix potential endless buffering issue
If a source with async video wasn't currently active, it would endlessly
buffer the video data, causing memory to grow until available memory was
exhausted.

This really needs to be replaced with a proper caching mechanism at some
point.
2014-04-28 20:38:15 -07:00
jp9000
6a59aef372 Improve output packet interleaving
The 'wait' constant was a terrible means of trying to ensure that the
packets were interleaved.  Instead, calculate the current highest
timestamps of each encoder that's present in the interleaved buffer, and
use that as a means of detecting whether the current packet should be
sent off.  This will guarantee sorting without relying on some arbitrary
constant that 'assumes' that it'll be interleaved.  It also avoids
buffering any more than what is needed to interleave.
2014-04-26 23:29:40 -07:00
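
The idea in sketch form: the packet at the front of the interleaved
buffer may only be sent once every other encoder has contributed a
packet with an equal or later timestamp (names here are illustrative,
not the actual internals):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool can_send_front_packet(int64_t front_dts,
                                      const int64_t *highest_dts,
                                      size_t num_encoders)
    {
        for (size_t i = 0; i < num_encoders; i++) {
            if (highest_dts[i] < front_dts)
                return false; /* keep buffering until this one catches up */
        }
        return true;
    }
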
Palana
12cc06b560 Add GPU based frame decompression for async video sources 2014-04-24 22:03:32 +02:00
Palana
065379bffa Add limited range rendering for rendering YUV non-full range sources 2014-04-24 21:45:54 +02:00
jp9000
8830c4102f obs-studio UI: Implement stream settings UI
- Updated the services API so that it links up with an output and
   the output gets data from that service rather than via settings.
   This allows the service context to have control over how an output is
   used, and makes it so that the URL/key/etc isn't necessarily some
   static setting.

   Also, if the service is attached to an output, it will stick around
   until the output is destroyed.

 - The settings interface has been updated so that it can allow the
   usage of service plugins.  What this means is that now you can create
   a service plugin that can control aspects of the stream, and it
   allows each service to create their own user interface if they create
   a service plugin module.

 - Testing out saving of current service information.  Saves/loads from
   JSON into obs_data_t, seems to be working quite nicely, and the
   service object information is saved/preserved on exit, and loaded
   again on startup.

 - I agonized over the settings user interface for days, and eventually
   I just decided that the only way that users weren't going to be
   fumbling over options was to split up the settings into simple/basic
   output, pre-configured, and then advanced for advanced use (such as
   multiple outputs or services, which I'll implement later).

   This was particularly painful to design right; I wanted more features
   and wanted to include everything in one interface, but I ultimately
   just realized from experience that users are just not technically
   knowledgeable about it and will end up fumbling with the settings
   rather than getting things done.

   Basically, what this means is that casual users only have to enter in
   about 3 things to configure their stream:  Stream key, audio bitrate,
   and video bitrate.  I am really happy with this interface for those
   types of users, but it definitely won't be sufficient for advanced
   usage or for custom outputs, so that stuff will have to be separated.

 - Improved the JSON usage for the 'common streaming services' context.
   I had forgotten that JSON objects are optimized for hashed lookups
   and don't guarantee ordering, while arrays are there to preserve
   order.  So basically I'm just using arrays now to keep its items
   sorted.
2014-04-24 02:19:03 -07:00
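
A sketch of the save half of that JSON round trip, using the modern
obs_data JSON helpers (obs_data_get_json / obs_data_create_from_json);
the file path is hypothetical:

    #include <obs.h>
    #include <stdio.h>

    static void save_service_settings(obs_service_t *service)
    {
        obs_data_t *settings = obs_service_get_settings(service);
        const char *json = obs_data_get_json(settings);

        FILE *file = fopen("service.json", "wb"); /* hypothetical path */
        if (file) {
            fputs(json, file);
            fclose(file);
        }

        /* loading is the reverse: read the file, then call
         * obs_data_create_from_json(text) */
        obs_data_release(settings);
    }
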
jp9000
4a6d19f206 libobs: Add services API, reduce repeated code
Add API for streaming services.  The services API simplifies the
creation of custom service features and user interface.

Custom streaming services later on will be able to do things such as:

 - Be able to use service-specific APIs via modules, allowing a more
   direct means of communicating with the service and requesting or
   setting service-specific information

 - Get URL/stream key via other means of authentication such as OAuth,
   or be able to build custom URLs for services that require that sort
   of thing.

 - Query information (such as viewer count, chat, follower
   notifications, and other information)

 - Set channel information (such as current game, current channel title,
   activating commercials)

Also, I reduced some repeated code that was used for all libobs objects.
This includes the name of the object, the private data, settings, as
well as the signal and procedure handlers.

I also switched to using linked lists for the global object lists,
rather than using an array of pointers (you could say it was..
pointless.)  ..Anyway, the linked list info is also stored in the shared
context data structure.
2014-04-19 20:38:53 -07:00
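
Registering a custom service then looks roughly like this (field names
follow the current headers; the layout at this commit may have differed
slightly):

    #include <obs-module.h>

    static const char *my_service_name(void *type_data)
    {
        return "My Streaming Service";
    }

    static void *my_service_create(obs_data_t *settings,
                                   obs_service_t *service)
    {
        return bzalloc(1); /* private context would go here */
    }

    static void my_service_destroy(void *data)
    {
        bfree(data);
    }

    static struct obs_service_info my_service = {
        .id = "my_service",
        .get_name = my_service_name,
        .create = my_service_create,
        .destroy = my_service_destroy,
    };

    /* called from the module's obs_module_load() */
    void register_my_service(void)
    {
        obs_register_service(&my_service);
    }
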
jp9000
2451b80ef6 Fix drawing bug with async video sources
Before, async video sources would flicker because they were only being
drawn when they were updated.  So when updated, they'd draw that frame,
then it would stop drawing it until it updated again.  This fixes that
issue and they should now draw properly.

Also, fix a few other minor bugs and issues relating to async video,
and make it so that non-async video filters can be properly applied to
them.

For the purposes of testing, change the 'test-random' source to an async
video source that updates every quarter of a second with a new random
face.

Also fix a bug where non-async video sources wouldn't have filter
effects applied properly.
2014-04-13 02:22:28 -07:00
jp9000
519c4f4118 Fix issue when using multiple video encoders
- Fix an issue that could occur when using more than one video encoder.
   Audio/video would not sync up correctly because they were expected to
   be paired with a particular encoder.  This simply adds a little
   helper variable to encoder packets that specifies the system time in
   microseconds.  We then use that system time to sync audio and video.

 - Fix an issue with x264 with fractional FPS rates (29.97 and 59.94
   particularly) where it would create ridiculously large stream
   outputs.  The problem was that you shouldn't set the timebase_*
   variables in the x264 params manually, let x264 handle the default
   values for it and leave them at 0.

 - Make x264 use CFR output, because there's no reason to ever use VFR
   in this case.
2014-04-10 11:59:42 -07:00
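
The x264 fix boils down to this (standard x264 API; the preset and
values shown are illustrative):

    #include <x264.h>

    static void setup_x264(x264_param_t *params)
    {
        x264_param_default_preset(params, "veryfast", NULL);

        params->i_fps_num = 30000; /* 29.97 fps */
        params->i_fps_den = 1001;

        /* leave i_timebase_num/i_timebase_den at their defaults --
         * setting them manually caused the huge stream outputs */

        params->b_vfr_input = 0;   /* CFR, as described above */
    }
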
jp9000
92522d1886 Implement RTMP module (still needs drop code)
- Implement the RTMP output module.  This time around, we just use a
   simple FLV muxer, then just write to the stream with RTMP_Write.
   Easy and effective.

 - Fix the FLV muxer, the muxer now outputs proper FLV packets.

 - Output API:
   * When using encoders, automatically interleave encoded packets
     before sending it to the output.

   * Pair encoders and have them automatically wait for the other to
     start to ensure sync.

   * Change 'obs_output_signal_start_fail' to 'obs_output_signal_stop'
     because it was a bit confusing, and doing this makes a lot more
     sense for outputs that need to stop suddenly (disconnections/etc).

 - Encoder API:
   * Remove some unnecessary encoder functions from the actual API and
     make them internal.  Most of the encoder functions are handled
     automatically by outputs anyway, so there's no real need to expose
     them and end up inadvertently confusing plugin writers.

   * Have audio encoders wait for the video encoder to get a frame, then
     start at the exact data point that the first video frame starts to
      ensure the most accurate sync of video/audio possible.

   * Add a required 'frame_size' callback for audio encoders that
     returns the expected number of frames desired to encode with.  This
     way, the libobs encoder API can handle the circular buffering
     internally automatically for the encoder modules, so encoder
     writers don't have to do it themselves.

 - Fix a few bugs in the serializer interface.  It was passing the wrong
   variable for the data in a few cases.

 - If a source has video, make obs_source_update defer the actual update
   callback until the tick function is called to prevent threading
   issues.
2014-04-07 22:00:10 -07:00
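
A sketch of the frame_size idea for an audio encoder: report how many
audio frames each packet consumes and let libobs handle the buffering.
get_frame_size is the modern field name, and AAC's 1024 frames is just
the classic example:

    #include <obs-module.h>

    static size_t my_aac_frame_size(void *data)
    {
        return 1024; /* AAC consumes 1024 audio frames per packet */
    }

    static struct obs_encoder_info my_aac_encoder = {
        .id = "my_aac",
        .type = OBS_ENCODER_AUDIO,
        .codec = "AAC",
        /* create/destroy/encode callbacks omitted for brevity */
        .get_frame_size = my_aac_frame_size,
    };
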
jp9000
8c74db9ffc Add packet interleaving and improve encoder API
- Add interleaving of video/audio packets for outputs that are encoded
   and expect both video and audio data, sorting the packets and sending
   them to the output when both video and audio is received.

 - Combine create and initialize callbacks for the encoder API callback
   interface.
2014-04-04 23:21:19 -07:00
jp9000
1bca7e0a3e Improve properties API
Improve the properties API so that it can actually respond somewhat to
user input.  Maybe later this might be further improved or replaced with
something script-based.

When creating a property, you can now add a callback to that property
that notifies when the property has been changed in the user interface.
Return true if you want the properties to be refreshed, or false if not.
Though now that I think about it I doubt there would ever be a case
where you would have this callback and *not* refresh the properties.

Regardless, this allows functions to change the values of properties or
settings, or enable/disable/hide other property controls from view
dynamically.
2014-04-04 00:30:37 -07:00
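
A sketch of the callback in use, attached with the modern
obs_property_set_modified_callback (this commit may have attached it at
creation instead). Returning true refreshes the controls, which lets
one property show or hide another:

    #include <obs-module.h>
    #include <string.h>

    static bool mode_modified(obs_properties_t *props,
                              obs_property_t *prop, obs_data_t *settings)
    {
        bool custom = strcmp(obs_data_get_string(settings, "mode"),
                             "custom") == 0;

        obs_property_set_visible(obs_properties_get(props, "server"),
                                 custom);
        return true; /* refresh the property controls */
    }

    static obs_properties_t *get_properties(void *data)
    {
        obs_properties_t *props = obs_properties_create();
        obs_property_t *p = obs_properties_add_list(props, "mode", "Mode",
                OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);

        obs_property_list_add_string(p, "Custom", "custom");
        obs_properties_add_text(props, "server", "Server",
                                OBS_TEXT_DEFAULT);
        obs_property_set_modified_callback(p, mode_modified);
        return props;
    }
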
jp9000
6da26a3a1c Implement encoder usage with outputs
- Make it so that encoders can be assigned to outputs.  If an encoder
   is destroyed, it will automatically remove itself from that output.
   I specifically didn't want to do reference counting because it leaves
   too much potential for unchecked references and it just felt like it
   would be more trouble than it's worth.

 - Add a 'flags' value to the output definition structure.  This lets
   the output specify if it uses video/audio, and whether the output is
   meant to be used with OBS encoders or not.

 - Remove boilerplate code for outputs.  This makes it easier to program
   outputs.  The boilerplate code involved before was mostly just
   involving connecting to the audio/video data streams directly in each
   output plugin.

   Instead of doing that, simply add plugin callback functions for
   receiving video/audio (either encoded or non-encoded, whichever it's
   set to use), and then call obs_output_begin_data_capture and
   obs_output_end_data_capture to automatically handle setting up
   connections to raw or encoded video/audio streams for the plugin.

 - Remove 'active' function from output callbacks, as it's no longer
   really needed now that the libobs output context automatically knows
   when the output is active or not.

 - Make it so that an encoder cannot be destroyed until all data
   connections to the encoder have been removed.

 - Change the 'start' and 'stop' functions in the encoder interface to
   just an 'initialize' callback, which initializes the encoder.

 - Make it so that the encoder must be initialized first before the data
   stream can be started.  The reason why initialization was separated
   from starting the encoder stream was because we need to be able to
   check that the settings used with the encoder *can* be used first.

   This problem was especially annoying if you had both video/audio
   encoding.  Before, you'd have to check the return value from
   obs_encoder_start, and if that second encoder fails, then you
   basically had to stop the first encoder again, making for
   unnecessary boilerplate code whenever starting up two encoders.
2014-03-27 21:50:15 -07:00
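
With the boilerplate gone, an output's start/stop callbacks reduce to
roughly the following sketch (connect_to_server and
disconnect_from_server are hypothetical helpers):

    #include <obs-module.h>

    struct my_output {
        obs_output_t *output;
        /* sockets, etc. */
    };

    static bool connect_to_server(struct my_output *ctx) { return true; }
    static void disconnect_from_server(struct my_output *ctx) {}

    static bool my_output_start(void *data)
    {
        struct my_output *ctx = data;

        if (!connect_to_server(ctx)) /* hypothetical */
            return false;

        /* libobs now wires up the raw or encoded A/V callbacks itself */
        return obs_output_begin_data_capture(ctx->output, 0);
    }

    static void my_output_stop(void *data)
    {
        struct my_output *ctx = data;

        obs_output_end_data_capture(ctx->output);
        disconnect_from_server(ctx); /* hypothetical */
    }
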
jp9000
154e0c59e1 Use atomic functions where appropriate
Also, rename atomic functions to be consistent with the rest of the
platform/threading functions, and move atomic functions to threading*
files rather than platform* files
2014-03-16 18:26:46 -07:00
jp9000
fd37d9e9a8 Implement encoder interface (still preliminary)
- Implement OBS encoder interface.  It was previously incomplete, but
   now is reaching some level of completion, though probably should
   still be considered preliminary.

   I had originally implemented it so that encoders only have a 'reset'
   function to reset their parameters, but I felt that having both a
   'start' and 'stop' function would be useful.

   Encoders are now assigned to a specific video/audio media output each
   rather than implicitly assigned to the main obs video/audio
   contexts.  This allows separate encoder contexts that aren't
   necessarily assigned to the main video/audio context (which is useful
   for things such as recording specific sources).  Will probably have
   to do this for regular obs outputs as well.

   When creating an encoder, you must now explicitly state whether that
   encoder is an audio or video encoder.

   Audio and video can optionally be automatically converted depending
   on what the encoder specifies.

   When something 'attaches' to an encoder, the first attachment starts
   the encoder, and the encoder automatically attaches to the media
   output context associated with it.  Subsequent attachments won't have
   the same effect; they will just start receiving the same encoder data
   when the next keyframe plays (along with SEI if any).  When detaching
   from the encoder, the last detachment will fully stop the encoder and
   detach the encoder from the media output context associated with the
   encoder.

   SEI must actually be exported separately; because new encoder
   attachments may not always be at the beginning of the stream, the
   first keyframe they get must have that SEI data in it.  If the
   encoder has SEI data, it needs only add one small function to simply
   query that SEI data, and then that data will be handled automatically
   by libobs for all subsequent encoder attachments.

 - Implement x264 encoder plugin, move x264 files to separate plugin to
   separate necessary dependencies.

 - Change video/audio frame output structures to not use const
   qualifiers to prevent issues with non-const function usage elsewhere.
   This was an issue when writing the x264 encoder, as the x264 encoder
   expects non-const frame data.

   Change stagesurf_map to return a non-const data type to prevent this
   as well.

 - Change full range parameter of video scaler to be an enum rather than
   boolean
2014-03-16 16:21:34 -07:00
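
The SEI export mentioned above amounts to one small callback on the
encoder, sketched here with the modern get_sei_data spelling (struct
my_enc is illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct my_enc {
        uint8_t *sei;
        size_t sei_size;
    };

    static bool my_get_sei_data(void *data, uint8_t **sei, size_t *size)
    {
        struct my_enc *enc = data;

        if (!enc->sei_size)
            return false;

        *sei = enc->sei;   /* libobs prepends this to the first keyframe
                            * that each new attachment receives */
        *size = enc->sei_size;
        return true;
    }
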
jp9000
5288467aeb Ensure names are valid
Ensure that a source has a valid name.  Duplicates aren't a big deal
internally, but sources without a name are probably something that
should be avoided.  Made it so that if a source is programmatically
created without a name, it's assigned an index-based name.

In the main basic-mode window, made it check to make sure the name was
valid as well.
2014-03-10 13:39:51 -07:00
jp9000
02a07ea0a0 Add preliminary streaming code for testing
- Add some temporary streaming code using FFmpeg.  FFmpeg itself is not
   very ideal for streaming; lack of direct control of the sockets and
   no framedrop handling means that FFmpeg is definitely not something
   you want to use without wrapper code.  I'd prefer writing my own
   network framework in this particular case just because you give away
   so much control of the network interface.  Wasted an entire day
   trying to go through FFmpeg issues.

   There's just no way FFmpeg should be used for real streaming (at
   least without being patched or submitting some sort of patch, but I'm
   sort of feeling "meh" on that idea)

   I had to end up writing multiple threads just to handle both
   connecting and writing, because av_interleaved_write_frame blocks
   every call, stalling the main encoder thread, and thus also stalling
   draw signals.

 - Add some temporary user interface for streaming settings.  This is
   just temporary for the time being.  It's in the outputs section of
   the basic-mode settings

 - Make it so that dynamic arrays do not free all their data when the
   size just happens to be reduced to 0.  This prevents constant
   reallocation when an array keeps going from 1 item to 0 items.  Also,
   it was bad to become dependent upon that functionality.  You must now
   always explicitly call "free" on it to ensure the data is freed, and
   that's how it should be.  Implicit functionality can lead to
   confusion and maintainability issues.
2014-03-10 13:10:35 -07:00
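
For the dynamic array change, the rule is now simply this
(util/darray.h):

    #include <util/darray.h>

    static void example(void)
    {
        DARRAY(int) values;
        da_init(values);

        int x = 5;
        da_push_back(values, &x);
        da_resize(values, 0); /* back to 0 items: memory is kept, not freed */

        da_free(values);      /* freeing must now always be explicit */
    }
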
jp9000
60e6316a5e Separate source activation for main/aux views
Split off activate to activate and show callbacks, and split off
deactivate to deactivate and hide callbacks.  Sources didn't previously
have a means to know whether they were actually being displayed in the
main view or just happened to be visible somewhere.  Now, for things like
transition sources, they have a means of knowing when they have actually
been "activated" so they can initiate their sequence.

A source is now only considered "active" when it's being displayed by
the main view.  When a source is shown in the main view, the activate
callback/signal is triggered.  When it's no longer being displayed by
the main view, deactivate callback/signal is triggered.

When a source is visible in any view at all, the show callback/signal is
triggered.  If it's no longer visible in any view, then the hide
callback/signal is triggered.

Presentation volume will now only be active when a source is active in
the main view rather than also in auxiliary views.

Also fix a potential bug where parents wouldn't properly increment or
decrement all the activation references of a child source when a child
was added or removed.
2014-02-23 17:46:00 -07:00
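
Where the four callbacks land in a source definition (modern field
names; get_name/create/destroy and the rest are omitted):

    #include <obs-module.h>

    static void my_activate(void *data)   { /* shown in the MAIN view  */ }
    static void my_deactivate(void *data) { /* left the main view      */ }
    static void my_show(void *data)       { /* visible in ANY view     */ }
    static void my_hide(void *data)       { /* visible in no view      */ }

    static struct obs_source_info my_source = {
        .id = "my_source",
        .type = OBS_SOURCE_TYPE_INPUT,
        .output_flags = OBS_SOURCE_VIDEO,
        .activate = my_activate,
        .deactivate = my_deactivate,
        .show = my_show,
        .hide = my_hide,
    };
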
jp9000
be81276f03 Implement volume handling
- Remove obs_source::type because it became redundant now that the
   type is always stored in the obs_source::info variable.

 - Apply presentation volumes of 1.0 and 0.0 to sources when they
   activate/deactivate, respectively.  It also applies that presentation
   volume to all sub-sources, with exception of transition sources.
   Transition sources must apply presentation volume manually to their
   sub-sources with the new transition functions below.

 - Add a "transition_volume" variable to obs_source structure, and add
   three functions for handling volume for transitions:

   * obs_transition_begin_frame
   * obs_source_set_transition_vol
   * obs_transition_end_frame

   Because the to/from targets of a transition source might both contain
   some of the same sources, handling the transitioning of volumes for
   that specific situation becomes an issue.

   So for transitions, instead of modifying the presentation volumes
   directly for both sets of sources, we do this:

   - First, call obs_transition_begin_frame at the beginning of each
     transition frame, which will reset transition volumes for all
     sub-sources to 0.  Presentation volumes remain unchanged.

   - Call obs_source_set_transition_vol on each sub-source, which will
     then add the volume to the transition volume for each source in
     that source's tree.  Presentation volumes still remain unchanged.

   - Then you call obs_transition_end_frame when complete, which will
     then finally set the presentation volumes to the transition
     volumes.

   For example, let's say that there's one source that's within both the
   "transitioning from" sources and "transition to" sources.  It would
   add both the fade in and fade out volumes to that source, and then
   when the frame is complete, it would set the presentation volume to
   the sum of those two values, rather than set the presentation volume
   for that same source twice which would cause weird volume jittering
   and also set the wrong values.
2014-02-21 19:41:38 -07:00
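
The per-frame sequence in sketch form; the parameter lists here are
assumptions based on the description above, not verified signatures:

    #include <obs.h>

    static void tick_transition_volumes(obs_source_t *transition,
                                        obs_source_t *from,
                                        obs_source_t *to, float t)
    {
        obs_transition_begin_frame(transition); /* transition vols -> 0 */

        /* a source present in BOTH trees simply accumulates the sum */
        obs_source_set_transition_vol(from, 1.0f - t); /* fading out */
        obs_source_set_transition_vol(to, t);          /* fading in  */

        obs_transition_end_frame(transition); /* commit as presentation
                                               * volumes */
    }
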
jp9000
d4f1eacc1f Implement source activation/deactivation
Now sources will be properly activated and deactivated when they are in
use or not in use.

Had to figure out a way to handle child sources, and children of
children, just ended up implementing simple functions that parents use
to signal adding/removal to help with hierarchial activation and
deactivation of child sources.

To prevent the source activate/deactivate callbacks from being called
more than once, added an activation reference counter.  The first
increment will call the activate callback, and the last decrement will
call the deactivate callback.

Added "source-activate" and "source-deactivate" signals to the main obs
signal handler, and "activate" and "deactivate" to individual source
signal handlers.

Also, fixed the main window so it properly selects a source when the
current active scene has been changed.
2014-02-20 22:04:14 -07:00
jp9000
14c95ac421 Add functions to enumerate source children/tree 2014-02-20 17:44:42 -07:00
jp9000
d6ec5438a8 Add source audio sync offset setting 2014-02-20 16:16:25 -07:00
jp9000
579f026c5b Add more volume options
Added a "master" volume for the entire audio subsystem.

Also, added a "presentation" volume for both the master volume and for
each individual source.  The presentation volume is used to control
things like transitioning volumes, preventing sources from outputting
any audio when they're inactive, as well as some other uses in the
future.
2014-02-20 15:56:51 -07:00
jp9000
2dbbffe4a2 Make a number of key optimizations
- Changed glMapBuffer to glMapBufferRange to allow invalidation.  Using
   just glMapBuffer alone was causing some unacceptable stalls.

 - Changed dynamic buffers from GL_DYNAMIC_WRITE to GL_STREAM_WRITE
   because I had misunderstood the OpenGL specification

 - Added _OPENGL and _D3D11 builtin preprocessor macros to effects to
   allow special processing if needed

 - Added fmod support to shaders (NOTE: D3D and GL do not function
   identically with negative numbers when using this.  Positive numbers
   however function identically)

 - Created a planar conversion shader that converts from packed YUV to
   planar 420 right on the GPU without any CPU processing.  Reduces
   required GPU download size to approximately 37.5% of its normal rate
   as well.  GPU usage down by 10 entire percentage points despite the
   extra required pass.
2014-02-16 19:28:21 -07:00
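
The mapping change, roughly (standard OpenGL; the loader header is an
assumption):

    #include <GL/glew.h> /* assumption: whichever GL loader is in use */

    static void *map_dynamic_buffer(GLuint buffer, GLsizeiptr size)
    {
        glBindBuffer(GL_ARRAY_BUFFER, buffer);

        /* invalidating the old contents lets the driver hand back fresh
         * storage instead of stalling until the GPU finishes with it */
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                GL_MAP_WRITE_BIT |
                                GL_MAP_INVALIDATE_BUFFER_BIT);
    }
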
jp9000
4bc282f5e9 Rename obs_viewport to obs_view
I felt like the name could cause a bit of confusion with typical
graphics viewports, so I just changed it to view instead.
2014-02-13 10:21:16 -07:00
jp9000
515f44be8e Revamp rendering system to allow custom rendering
Originally, the rendering system was designed to only display sources
and such, but I realized there would be a flaw; if you wanted to render
the main viewport in a custom way, or maybe even the entire application
as a graphics-based front end, you wouldn't have been able to do that.

Displays have now been separated into viewports and displays.  A
viewport is used to store and draw sources, a display is used to handle
draw callbacks.  You can even use displays without using viewports to
draw custom render displays containing graphics calls if you wish, but
usually they would be used in combination with source viewports at
least.

This requires a tiny bit more work to create simple source displays, but
in the end it's worth it for the added flexibility and options it brings.
2014-02-13 08:58:31 -07:00
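
Using a display purely for custom rendering then looks roughly like
this (obs_display_add_draw_callback is the modern name):

    #include <obs.h>

    static void draw_preview(void *param, uint32_t cx, uint32_t cy)
    {
        obs_source_t *source = param;
        obs_source_video_render(source); /* or any raw gs_* draw calls */
    }

    static void attach_preview(obs_display_t *display,
                               obs_source_t *source)
    {
        obs_display_add_draw_callback(display, draw_preview, source);
    }
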
jp9000
8e81d8be56 Revamp API and start using doxygen
The API used to be designed in such a way to where it would expect
exports for each individual source/output/encoder/etc.  You would export
functions for each and it would automatically load those functions based
on a specific naming scheme from the module.

The idea behind this was that I wanted to limit the usage of structures
in the API so only functions could be used.  It was an interesting idea
in theory, but this idea turned out to be flawed in a number of ways:

 1.) Requiring exports to create sources/outputs/encoders/etc meant that
     you could not create them by any other means, which meant that
     things like faruton's .net plugin would become difficult.

 2.) Export function declarations could not be checked, therefore if you
     created a function with the wrong parameters and parameter types,
     the compiler wouldn't know how to check for that.

 3.) Required overly complex load functions in libobs just to handle it.
     It makes much more sense to just have a load function that you call
     manually.  Complexity is the bane of all good programs.

 4.) It required that you have functions of specific names, which looked
     and felt somewhat unsightly.

So, to fix these issues, I replaced it with a more commonly used API
scheme, seen commonly in places like kernels and typical C libraries
with abstraction.  You simply create a structure that contains the
callback definitions, and you pass it to a function to register that
definition (such as obs_register_source), which you call in the
obs_module_load of the module.

It will also automatically check the structure size and ensure that it
only loads the required values if the structure happened to add new
values in an API change.

The "main" source file for each module must include obs-module.h, and
must use OBS_DECLARE_MODULE() within that source file.

Also, started writing some doxygen documentation into the main library
headers.  Will add more detailed documentation as I go.
2014-02-12 08:04:50 -07:00
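
The new registration scheme in its entirety, for a minimal module
(obs_module_load is shown with its modern signature):

    #include <obs-module.h>

    OBS_DECLARE_MODULE()

    extern struct obs_source_info my_source; /* defined elsewhere */

    bool obs_module_load(void)
    {
        obs_register_source(&my_source);
        return true;
    }
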
jp9000
6c92cf5841 Implement output, improve video/audio subsystems
- Fill in the rest of the FFmpeg test output code for testing so it
   actually properly outputs data.

 - Improve the main video subsystem to be a bit more optimal and
   automatically output I420 or NV12 if needed.

 - Fix audio subsystem insertion and byte calculation.  Now it will
    seamlessly insert new audio data into the audio stream based upon
   its timestamp value.  (Be extremely cautious when using floating
   point calculations for important things like this, and always round
   your values and check your values)

 - Use 32 byte alignment in case of future optimizations and export a
   function to get the current alignment.

 - Make os_sleepto_ns return true if it slept, or false if the target
    time had already passed before the call.

 - Fix sinewave output so that it actually properly calculates a middle
   C sinewave.

 - Change the use of row_bytes to linesize (also makes it a bit more
   consistent with FFmpeg's naming as well)
2014-02-09 05:51:06 -07:00
jp9000
89cfbdc033 Improve naming scheme of libobs core structures 2014-02-05 21:03:06 -07:00
jp9000
ab4ab95790 Implement output scaling/conversion/downloading
- Implement texture scaling/conversion/downloading for the main view so
  we can finally start getting data to output.

  Also, redesign how it works a bit, it will now properly wait one full
  frame for each step in the process:  rendering the main texture,
  scaling the main texture to an output texture, staging/downloading the
  output texture, and then outputting that staged data.  This way, the
  GPU will have more than enough time to fully complete each step.

- Fix a bug with OpenGL plugin's texture staging function.  Was using
  glBindBuffer instead of what should have been used:  glBindTexture.

- Change the naming scheme of the variables in default.effect.  It's now
  named with the idea of just "color matrix" in mind instead of "yuv
  matrix", and instead of DrawRGBToYUV, it's now just DrawMatrix.
2014-02-05 20:36:21 -07:00
jp9000
6f51567c93 Simplify/improve UI exporting a bit more
Reduce and simplify the UI export interface.  Having to export functions
with designated names was a bit silly for this case; it makes more sense
for inputs/outputs/etc because they have more functions associated with
them, but in this case the callback can be retrieved simply through the
enumeration exports.  Makes it a bit easier and a little less awkward
for this situation.

Also, changed the exports and names to be a bit more consistent,
labelling them both as either "modal" or "modeless", and changed the UI
function calls to obs_exec_ui and obs_create_ui to imply modal/modeless
functionality a bit more.
2014-02-01 17:43:32 -07:00
jp9000
b31d52d602 Add support for modeless UI creation
I realized that I had intended modeless UI to be usable by plugins, but
it had been pointed out to me that modeless really needs to return a
pointer/handle to the user interface object that was created.
2014-02-01 12:48:35 -07:00
jp9000
a12656bd91 Add module UI export capability
Add the ability to be able to call and use toolkit-specific or
program-specific user interface in modules.

User interface code can be either bundled with the module, or 'split'
out in to separate libraries (recommended).

There are three reasons why splitting is recommended:

  1.) It allows plugins to be able to create custom user interface for
      each toolkit if desired.

  2.) Often, UI will be programmed in one language (the language of the
      toolkit), and core logic may be programmed in another.  This
      allows plugins to keep the languages separated if necessary.

  3.) It prevents direct linkage of UI toolkits libraries with core
      module logic.

Splitting is not required, though is recommended if you want your plugin
to be more flexible with other user interface toolkits or programs.

Will implement a generic properties lookup next, which will be used for
automatic UI handling so that plugin UI isn't necessarily required.
2014-02-01 00:49:50 -07:00
jp9000
de288ac541 Rename obs_data structure to obs_program_data
I'm doing this because I might create another data structure called
obs_data for a different purpose.  That and obs_program_data feels a bit
less vague for what it does.
2014-01-27 09:07:13 -07:00
jp9000
563613db8f Rename obs-data.h to obs-internal.h
Renaming obs-data.h to avoid confusion about its usage
2014-01-26 18:48:14 -07:00