mirror of https://github.com/mpv-player/mpv.git synced 2024-09-20 20:03:10 +02:00
Commit Graph

915 Commits

Author SHA1 Message Date
wm4
3db8715f70 manpage: minor fixes
In particular, the all_formats description split away the example section of
an unrelated option, so move that to its proper place.
2020-02-19 23:11:02 +01:00
wm4
6d8b4ca742 ytdl_hook: add all_formats option
Pretty worthless I guess. I only tested one site (and 2 videos), it's
somewhat likely that it will break with other sites. Even if you leave
the option disabled (the default).

Slightly related to #3548. This will allow you to use the bitrate
stream selection mechanism, which was added for HLS, with normal videos.
2020-02-19 16:33:48 +01:00
wm4
43c13e5ea2 ytdl_hook: add a way to not pass --format to the command line
Might be helpful for... whatever.
2020-02-19 16:31:04 +01:00
wm4
a4eb8f75c0 sub: add an option to filter subtitles by regex
Works as an ad-filter. I had some more plans, for example replacing
matching text with different text, but for now it's dropping matches
only. There's a big warning in the manpage that I might change
semantics. For example, I might turn it into a primitive sed.

In a sane world, you'd probably write a simple script that processes
downloaded subtitles before giving them to mpv, and avoid all this
complexity. But we don't live in a sane world, and the sooner you learn
this, the happier you will be. (But I also want to run this on muxed
subtitles.)

This is pretty straightforward. We use POSIX regexes, which are readily
available without additional pain or dependencies. This also means it's
(apparently) not available on win32 (MinGW). The regex list is because I
hate big monolithic regexes, and this makes it slightly better.

Very superficially tested.
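
(For illustration only, not the actual implementation: a minimal C sketch of
dropping subtitle lines that match a POSIX extended regex, which is roughly
what such an ad-filter boils down to. The function names and the example
pattern are made up.)

    #include <regex.h>
    #include <stdbool.h>
    #include <stdio.h>

    // Return true if the subtitle text matches any of the compiled regexes,
    // i.e. the event should be dropped.
    static bool sub_matches_any(const regex_t *res, int num, const char *text)
    {
        for (int n = 0; n < num; n++) {
            if (regexec(&res[n], text, 0, NULL, 0) == 0)
                return true;
        }
        return false;
    }

    int main(void)
    {
        regex_t re;
        // REG_EXTENDED: POSIX ERE syntax; REG_ICASE: ignore case.
        if (regcomp(&re, "opensubtitles\\.org", REG_EXTENDED | REG_ICASE))
            return 1;
        printf("%d\n", sub_matches_any(&re, 1, "Ripped by OpenSubtitles.org"));
        regfree(&re);
        return 0;
    }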
2020-02-16 02:07:24 +01:00
wm4
641d102101 msg: slightly improve --msg-time output
Cut the arbitrary offset, and document what unit/timesource it uses.
2020-02-14 16:12:37 +01:00
wm4
777c046b35 manpage: clarify --player-operation-mode
Adjust options.rst to clarify the option, and add some more text in mpv.rst
to separate out the compatibility stuff a little.

Fixes: #7461 (options.rst change only)
2020-02-14 12:58:45 +01:00
wm4
c3f93f5fdd sws_utils: use zimg by default if available
This seems stable enough to use. Change the default, and remove it from
the sw-fast profile.
2020-02-12 18:06:53 +01:00
wm4
a8404ed0a0 manpage: add some blabla about zimg speed vs. libswscale
Of course nobody will read this. I'm just putting it there so I can
blame users, who run into problems, for not having read it.
2020-02-10 18:59:34 +01:00
wm4
e9fc53a10b player: add ab-loop-count option/property
As requested, I guess. It behaves quite similarly to the --loop* options.

Not quite happy with the idea that 1) the option is mutated on each
operation (but at least it's consistent with --loop* and doesn't require
more properties), and 2) the ab-loop command will do nothing once all
loop iterations are done. As a concession, the OSD shows something about
"disabled".

Fixes: #7360
2020-02-08 15:01:33 +01:00
der richter
2607a2b892 mac: activate logging when started from the bundle
this creates a default log for the last mpv run when started from the
bundle. that way one can get a log of what happened even after an issue
occurred. also add a menu entry under Help to show the current log, but
only when the bundle is used.

Fixes #7396
Fixes #2547
2020-02-08 10:55:07 +01:00
wm4
27d5d32020 demux: add option to disable "sharing" between back and forward buffers
As requested. I guess option name and manpage text could be better and
clearer.

Closes: #7442
2020-02-07 15:58:13 +01:00
wm4
3d17e19c2c options: disable vsfilter blur compat by default
See #7435 and related for context.

Basically, it seems that while the original vsfilter processed subtitles
like with this option set to "yes", many current players (mpc-hc
default, vlc, probably most libass users) treat them like with "no". In
the linked issue, this makes rendering severely slower, and can consume
a lot of memory (or just overflow libass memory calculations). It seems
that changing this to "no" will lead to more good than bad, especially
because newer subtitles may be authored for the "no" behavior.

Most libass users seem to use "no" exactly because they do not call
ass_set_storage_size() at all. This API was needed because the scaling
of the subtitles depends on the video size (vsfilter bugs, or
something). In addition, it's my personal opinion that rendering should
not depend on the video at all, so I like setting the default of this to
"no".
2020-02-07 00:50:25 +01:00
wm4
77a74d9eb5 manpage: --sub-codepage cannot do muxed subs
mpv actually used to be able to, from what I remember, but this was
changed for simplicity and because of problems with FFmpeg.
2020-02-01 18:52:30 +01:00
dudemanguy
b926f18938 wayland: remove wayland-frame-wait-offset option
This originally existed as a hack for weston. In certain scenarios, a
frame taking too long to render would cause vo_wayland_wait_frame to
time out, which would result in a ton of dropped frames. The naive
solution was to just add a slight delay to the time value. If a
frame took too long, it would still likely fall under the timeout value and
all was well. This was exposed to the user since the default delay
(1000) was completely arbitrary.

However, with presentation time, this doesn't appear to be necessary.
Fresh frames that take longer than the display's refresh rate (16.666 ms
in most cases) behave well in Weston. In the other two main compositors
without presentation time (GNOME and Plasma), they also do not
experience any ill effects. It's better not to overcomplicate things, so
this "feature" can be removed now.
2020-01-31 00:40:44 +00:00
der richter
cbfcd3e703 manpage: update force dedicated gpu on macOS option
was forgotten in commit 3275cd0
2020-01-27 01:01:29 +01:00
wm4
1b283f6b60 libarchive: some shitty hack to make opening slightly faster
See manpage additions. The libarchive behavior mentioned in the last
paragraph there is technically unrelated, but makes this new option
mostly pointless.

See: #7182
2020-01-04 19:56:09 +01:00
Philip Langdale
5fcad696a9 manpage: update discussion of nvidia hardware acceleration
The text here has become somewhat outdated over the years, and it's
worth updating to reflect the current situation.
2019-12-29 15:09:46 -08:00
Philip Langdale
9c05be8999 video: cuda: add explicit context creation for copy hwaccels
In the distant past, the cuviddec backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.

Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.

And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.

Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.

Fixes #7295.
2019-12-29 14:32:47 -08:00
wm4
b6d7d246fe manpage: fix example in --hwdec section 2019-12-24 16:02:49 +01:00
wm4
380f01567d vd_lavc: more hwdec autoselect nonsense
Add an "auto-safe" mode, mostly triggered by Ubuntu's nonsense to force
hwdec=vaapi in the global config file in their mpv package. But to be
honest it's probably something more people want.

This is implemented as an explicit whitelist. On Windows, HEVC/Intel is
sometimes broken, but it's still whitelisted, and in theory we'd need a
detailed whitelist of device names etc. (like for example browsers tend
to do). On OSX, videotoolbox is a pretty bad choice, but unfortunately
the only one, so it's whitelisted too. There may be a larger number of
hwdec wrappers that work anyway, and I'm for example ignoring Android.
2019-12-24 09:24:22 +01:00
Nicolas F
93a6308bb7 video/out/x11: add fs-screen fallback
Apparently there are two different options for controlling which
screen an mpv window goes onto: --fs-screen and --screen. The former
explicitly only controls which screen a fullscreened window goes onto,
but mpv does not appear to actually care about this option at runtime on
X11, so pressing f will always fullscreen to the screen mpv is currently
on. This means the option is of questionable usefulness for starters.

Making it worse, if you use --screen=1 --fs, mpv will actually fullscreen
on screen 0, because --fs-screen isn't set. Instead of doing that, fall
back to whatever --screen is set to.
2019-12-22 02:33:48 +01:00
wm4
8448fe0b62 demux: add an option to control tag charset
Fucking gross that you need this in almost-2020.

Fixes: #7255
2019-12-20 13:00:39 +01:00
wm4
d3e3bd4307 options: increase consistency between list options and document them
Whenever I deal with this, I have to look at the code to make sense of
this. And beyond that, there are some strange inconsistencies. (I think
this code is cursed. It always was, and maybe always will be.)

Although the manpage claimed that using multiple items for -add etc. is
deprecated, string list options didn't warn against it. So add the
warning, and add something in the changelog (even though nobody will
ever read this).

The manpage mentioned --vf-append, but this didn't even exist. So add
it, I guess. We encourage using -append for the other option types, so
for consistency, it should work on filter options. (And I already
tricked myself into believing it existed when I mentioned it in the
manpage.)

Make the "operations" table separate for all option types, and mention
the option type on every single one of the top-level list options.
2019-12-18 05:32:02 +01:00
der richter
8a6ee7fe94 mac: remove Apple Remote support
the Apple Remote has long been deprecated and abandoned by Apple.
current macs don't come with support for it anymore. support might be
re-added with the next commit.
2019-12-15 20:07:31 +01:00
wm4
aee413d246 manpage: fix --vulkan-async-compute default value
Seems like this was silently changed to enabled by default on the change
to libplacebo, without adjusting the manpage. Fix the documented
default.

Also add a comment about Nvidia; see referenced issue.

Fixes: #7245
2019-12-12 12:46:59 +01:00
James Ross-Gowan
b3b2cc44fa console.lua: add this script
Merged from mpv-repl git repo commit 5ea2bf64f9c239f0326b02. Some
changes were made on top of it:

- Tabs were converted to 4 spaces indentation (plus some manual
  indentation fixes in some places).
- All user-visible mentions of "repl" were renamed to "console".
- The README was converted to a manpage (with heavy changes, some
  additions taken from stats.rst; rossy converted the key bindings
  table to RST).
- The method to change the default key binding was changed.
- Change minor detail about "font" default value setting (not a
  functional change).
- Integrate into the player as builtin script, including an option to
  prevent loading it.

Above changes and commit message done by wm4.

Signed-off-by: wm4 <wm4@nowhere>
2019-12-08 02:46:44 +01:00
dudemanguy
122bbb9ff4 DOCS: fix wayland-frame-wait offset value range
It actually goes down to -500, not -100.
2019-12-05 14:35:23 -06:00
wm4
3f7556baef x11: implement unminimization
This appears to work with IceWM.
2019-11-29 14:27:27 +01:00
wm4
40c2f2eeb0 command: change window-minimized/window-maximized to options
Unfortunately, this breaks window state reporting for all VOs which
supported it. This can be fixed later (for x11 in the next commit).
2019-11-29 13:56:58 +01:00
wm4
3a2dc8b22e command, options: deprecate old --display-fps behavior
See changelog and manpage changes.

(So much effort to fix an ancient dumb mistake for an option nobody
should use anyway.)
2019-11-25 00:47:53 +01:00
Chris Down
e143966a76 player: Optionally validate st_mtime when restoring playback state
I often watch sporting events. On many occasions I get files with the
same filename for each session. For example, for F1 I might have the
following directory structure:

    F1/
        FP1.mkv
        FP2.mkv
        FP3.mkv
        Qualification.mkv
        Race.mkv

Since usually one simply watches one race after the other, I usually
just rsync the new event's files over the old ones, so, for example,
Race.mkv will be replaced from the file for the last event with the file
from the new event.

One problem with this is that I like to use --resume-playback for other
kinds of media, so I have it on by default. That works great for, say, a
movie, but doesn't work so well with this scheme, because you can
trivially forget to pass --no-resume-playback on the command line and
end up 2 hours in, watching spoilers as the race results scroll down the
screen :-)

This patch adds a new option, --resume-playback-check-mtime, which
validates that the file's mtime hasn't changed since the watch_later
configuration was saved. It does this by setting the watch_later
configuration to have the same mtime as the file after it is saved.

Switching back and forth between checking mtime and not checking mtime
works fine, as we only choose whether to compare based on it, but we
update the watch_later configuration mtime regardless of its value.
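
(A sketch of the idea in portable C, assuming POSIX stat()/utime(); not mpv's
actual code, and the function names are invented.)

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <utime.h>

    // After saving the watch_later config, give it the media file's mtime.
    static bool copy_mtime(const char *media, const char *conf)
    {
        struct stat st;
        if (stat(media, &st) != 0)
            return false;
        struct utimbuf t = { .actime = st.st_atime, .modtime = st.st_mtime };
        return utime(conf, &t) == 0;
    }

    // On resume: only restore state if the media file was not replaced.
    static bool mtime_matches(const char *media, const char *conf)
    {
        struct stat sm, sc;
        return stat(media, &sm) == 0 && stat(conf, &sc) == 0 &&
               sm.st_mtime == sc.st_mtime;
    }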
2019-11-20 15:11:33 +01:00
wm4
f57f13ceb0 options: deprecate --input-file
I have no idea why this still exists, since we have --input-ipc-server.
I think there was something about Windows, but the latter option is
implemented even on Windows.
2019-11-16 15:28:18 +01:00
wm4
b6413f82b2 demux_lavf: fight ffmpeg API some more and get the timeout set
It sometimes happens that HLS streams freeze because the HTTP server is
not responding for a fragment (or something similar, the exact
circumstances are unknown). The --timeout option didn't affect this,
because it's never set on HLS recursive connections (these download the
fragments, while the main connection likely does nothing and just wastes a
TCP socket).

Apply an elaborate hack on top of an existing elaborate hack to somehow
get these options set. Of course this could still break easily, but hey,
it's ffmpeg, it can't not try to fuck you over. I'm so fucking sick of
ffmpeg's API bullshit, especially wrt. HLS.

Of course the change is sort of pointless. For HLS, GET requests should
just be aggressively retried (because they're not "streamed", they're just
actual files on a CDN), while normal HTTP connections should probably
not be made this fragile (they could be streamed, i.e. they are backed
by some sort of real time encoder, and block if there is no data yet).
The 1 minute default timeout is too high to save playback if this
happens with HLS.

Vaguely related to #5793.
2019-11-16 13:15:45 +01:00
wm4
5a99015acf stream_lavf: set --network-timeout to 60 seconds by default
Until now, we've made FFmpeg use the default network timeout - which is
apparently infinite. I don't know if this was changed at some point,
although it seems likely, as I was sure there was a more useful default.

For most use cases, a smaller timeout is more useful (for example
recording something in the background), so force a timeout of 1 minute.

See: #5793
2019-11-14 13:46:03 +01:00
dudemanguy
dcc3c2eb38 wayland: use hidpi-window-scale option 2019-11-12 01:00:08 +00:00
wm4
fb56896319 test: make tests part of the mpv binary
Until now, each .c file in test/ was built as a separate, self-contained
binary. Each binary could be run to execute the tests it contained.

Change this and make them part of the normal mpv binary. Now the tests
have to be invoked via the --unittest option. Do this for two reasons:

- Tests now run within a "properly" initialized mpv instance, so all
  services are available.
- Possibly simplifying the situation for future build systems.

The first point is the main motivation. The mpv code is entangled with
mp_log and the option system. It feels like a bad idea to duplicate some
of the initialization of this just so you can call code using them.

I'm also getting rid of cmocka. There wouldn't be any problem to keep it
(it's a perfectly sane set of helpers), but NIH calls. I would have had
to aggregate all tests into a CMUnitTest list, and I don't see how I'd
get different types of entry points easily. Probably easily solvable,
but since we made only pretty basic use of this library, NIH-ing this is
actually easier (I needed a list of tests with custom metadata anyway,
so all that was left was to reimplement the assert_* helpers).
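
(For illustration, a guess at what such a NIH test list can look like in C;
the real names and metadata in mpv's test/ directory differ.)

    #include <stdio.h>
    #include <string.h>

    struct unittest {
        const char *name;      // what an --unittest-style option matches
        void (*run)(void);     // asserts/crashes on failure, silent on success
    };

    static void test_json(void)  { /* assert-style checks go here */ }
    static void test_chmap(void) { /* ... */ }

    static const struct unittest tests[] = {
        {"json", test_json},
        {"chmap", test_chmap},
    };

    static int run_unittest(const char *name)
    {
        for (size_t n = 0; n < sizeof(tests) / sizeof(tests[0]); n++) {
            if (strcmp(tests[n].name, name) == 0) {
                tests[n].run();
                return 0;
            }
        }
        fprintf(stderr, "No such test: %s\n", name);
        return 1;
    }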

Unit tests now don't output anything, and if they fail, they'll simply
crash and leave a message that typically requires inspecting the test
code to figure out what went wrong (and probably editing the test code
to get more information). I even merged the various test functions into
single ones. Sucks, but here you go.

chmap_sel.c is merged into chmap.c, because I didn't see the point of
this being separate. json.c drops the print_message() to go along with
the new silent-by-default idea, also there's a memory leak fix unrelated
to the rest of this commit.

The new code is enabled with --enable-tests (--enable-test goes away).
Due to waf's option parser, --enable-test still works, because it's a
unique prefix to --enable-tests.
2019-11-08 00:26:37 +01:00
wm4
1c8d2246bf vo_gpu: vdpau actually works under EGL
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.

I didn't know this. Nvidia didn't list this as extension in the EGL
context when I still used their GPUs.

Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
2019-11-07 22:53:13 +01:00
wm4
17a89e5778 manpage: vdpauglx backend was removed
A while ago. It was 100% useless.
2019-11-07 22:53:13 +01:00
wm4
e8aae688c3 stream: bump default buffer size from 2K to 64K
(Only half of the buffer is actually used in a useful way, see manpage
or commit which added the option.)

Might have some advantages with broken network filesystem drivers.

See: #6802
2019-11-06 21:57:31 +01:00
wm4
f37f4de849 stream: turn into a ring buffer, make size configurable
In some corner cases (see #6802), it can be beneficial to use a larger
stream buffer size. Use this as argument to rewrite everything for no
reason.

Turn stream.c itself into a ring buffer, with configurable size. The
latter would have been easily achievable with minimal changes, and the
ring buffer is the hard part. There is no reason to have a ring buffer
at all, except possibly if ffmpeg don't fix their awful mp4 demuxer, and
some subtle issues with demux_mkv.c wanting to seek back by small
offsets (the latter was handled with small stream_peek() calls, which
are unneeded now).

In addition, this turns small forward seeks into reads (where data is
simply skipped). Before this commit, only stream_skip() did this (which
also means that stream_skip() simply calls stream_seek() now).

Replace all stream_peek() calls with something else (usually
stream_read_peek()). The function was a problem, because it returned a
pointer to the internal buffer, which is now a ring buffer with
wrapping. The new function just copies the data into a buffer, and in
some cases requires callers to dynamically allocate memory. (The most
common case, demux_lavf.c, required a separate buffer allocation anyway
due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_*
changes.
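
(A toy sketch of the wrapping copy such a stream_read_peek()-style function
needs once the buffer can wrap; mpv's real buffer management is more
involved, and the struct here is invented.)

    #include <stdint.h>
    #include <string.h>

    struct ring {
        uint8_t *buf;
        size_t size;        // power of two, so offsets can be masked
        int64_t start, end; // absolute stream positions currently buffered
    };

    // Copy out data at absolute position 'pos'; caller guarantees
    // [pos, pos + len) lies within [start, end).
    static void ring_copy_out(struct ring *r, int64_t pos,
                              uint8_t *dst, size_t len)
    {
        while (len) {
            size_t off = (size_t)pos & (r->size - 1);
            size_t chunk = r->size - off;
            if (chunk > len)
                chunk = len;
            memcpy(dst, r->buf + off, chunk);
            dst += chunk;
            pos += chunk;
            len -= chunk;
        }
    }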

I'm not happy with this. There still isn't a good reason why there
should be a ring buffer, that is complex, and most of the time just
wastes half of the available memory. Maybe another rewrite soon.

It also contains bugs; you're an alpha tester now.
2019-11-06 21:36:02 +01:00
wm4
872df1e06f manpage: opengl-cb -> libmpv
This was renamed ages ago. Fix the outdated usage. Except where
opengl-cb was correct.
2019-11-04 16:17:07 +01:00
wm4
f043d73405 manpage: fix global config file path in --hwdec description 2019-11-04 00:48:03 +01:00
wm4
a4f92cef1a manpage: shovel around --hwdec description (again)
Not like anyone reads it. Although putting all this text before listing
the allowed option values sort of has the intention to discourage users
from using the option at all. Advertise Ctrl+h, which is a decent way of
enabling hardware decoding temporarily.
2019-11-04 00:01:05 +01:00
wm4
67e17f1104 vd_lavc: don't keep packets for fallbacks if errors are tolerated
The user can raise the number of tolerated hardware decoding errors. On
the other hand, we have a static limit on packets that are "saved" for
fallback handling (and that's a good idea to avoid unbounded memory
usage). In this case, it could happen that the start of a file was fine
after a fallback, but after that buffered amount of data, it would
suddenly skip.

It's more useful to skip buffering entirely if the number of tolerated
decoding errors exceeds the fixed buffer.

(And also, I'm sure nobody gives a shit about this feature.)
2019-11-02 23:00:49 +01:00
wm4
985a1cde5a manpage: update --framedrop option
The statement about the display FPS is outdated by several years.
"audio"-sync mode does not use the display FPS anymore, and that it's
X11 only also isn't true anymore.

These modes have separate implementations for the audio and display-video
sync modes, so the explanations are separate.

Why the hell are users playing around with this anyway? The explanations
are probably too special to make sense for anyone who doesn't know the
code (and who knows the code doesn't need them anyway), but whatever.
2019-11-02 14:29:24 +01:00
wm4
00838fe0c3 zimg: make --zimg-fast=yes default
This is mostly just because of the odd RGB default gamma issue, which
shouldn't have any real impact. This also sets allow_approximate_gamma,
which I hope is fine for normal use cases.
2019-11-02 02:22:16 +01:00
wm4
706e708d2f options: make --show-profile without parameters list all profiles 2019-10-31 17:32:57 +01:00
wm4
e96ab5becb manpage: fix another typo 2019-10-31 17:27:17 +01:00
wm4
9e0b0be8ee manpage: update --zimg-scaler default
Forgotten in previous commit.
2019-10-31 17:01:31 +01:00
wm4
a7230dfed0 sws_utils, zimg: destroy vo_x11 and vo_drm performance
Raise swscale and zimg default parameters. This restores screenshot
quality settings (maybe) unset in the commit before. Also expose some
more libswscale and zimg options.

Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.

The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.

Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
2019-10-31 16:51:12 +01:00
Jan Ekström
fc29620ec8 vo_gpu/d3d11: add support for configuring swap chain color space
By default utilizes the color space of the desktop on which the
swap chain is located. If a specific value is defined, it will be
utilized instead.

Enables configuration of the PQ color space (BT.2020 primaries,
PQ transfer function) for HDR.

Additionally, signals the swap chain color space to the renderer,
so that the render looks correct without having to specify
target-trc or target-prim manually.

Due to all of the APIs being Win10+ only, this will only work starting
with Windows 10.
2019-10-30 02:41:25 +02:00
wm4
4a82349900 input: disable gamepad code by default
Enabling this by default probably causes a number of issues, such as
breaking vo_sdl, or reacting to various input devices while the window
is not focused. It's also pretty obscure, or at least new. Disable it by
default.
2019-10-25 21:54:35 +02:00
wm4
e67386e50b manpage: fix --script docs
This doesn't take a ',' separated list. --script is just an alias for
--scripts--append. --scripts accepts a list, but uses the
mplayer-inherited platform-dependent path separator.

Fixes: #5996
2019-10-25 13:41:34 +02:00
wm4
77f309c94f vo_gpu, options: don't return NaN through API
Internally, vo_gpu uses NaN for some options to indicate a default value
that is different depending on the context (e.g. different scalers).
There are 2 problems with this:

1. you couldn't reset the options to their defaults
2. NaN is a damn mess and shouldn't be part of the API

The option parser already rejected NaN explicitly, which is why 1.
didn't work. Regarding 2., JSON might be a good example, and actually
caused a bug report.

Fix this by mapping NaN to the special value "default". I think I'd
prefer other mechanisms (maybe just having every scaler expose separate
options?), but for now this will do. See you in a future commit, which
painfully deprecates this and replaces it with something else.
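
(Rough sketch of the mapping in C, with made-up helper names; the real option
code is more general than this.)

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // "default" <-> NAN at the option boundary, so the public API (and JSON)
    // never sees a NaN.
    static int parse_scale_param(const char *s, double *out)
    {
        if (strcmp(s, "default") == 0) {
            *out = NAN;
            return 0;
        }
        char *end;
        double v = strtod(s, &end);
        if (*end || isnan(v))   // still reject a literal "nan"
            return -1;
        *out = v;
        return 0;
    }

    static void print_scale_param(double v, char *buf, size_t n)
    {
        if (isnan(v))
            snprintf(buf, n, "default");
        else
            snprintf(buf, n, "%f", v);
    }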

I refrained from using "no" (my favorite magic value for "unset" etc.)
because then I'd have to e.g. make --no-scale-param1 work, which in
addition to a lot of effort looks dumb and nobody will use it.

Here's also an apology for the shitty added test script.

Fixes: #6691
2019-10-25 00:25:05 +02:00
Stefano Pigozzi
899e0bd16b input: add gamepad support through SDL2
The code is very basic:

- only handles gamepads, could be extended for generic joysticks in the
  future.
- only has button mappings for controllers natively supported by SDL2.
  I heard more can be added through env vars, there's also ways to load
  mappings from text files, but I'd rather not go there yet. Common ones
  like Dualshock are supported natively.
- analog buttons (TRIGGER and AXIS) are mapped to discrete buttons using an
  activation threshold (see the sketch after this list).
- only supports one gamepad at a time. the feature is intended to use
  gamepads as evolved remote controls, not play multiplayer games in mpv :)
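
(A sketch of the threshold mapping with SDL2, not the actual code; the
threshold value is made up.)

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    // Map an analog axis/trigger value (-32768..32767) to a discrete
    // pressed/released state, so it fits a button-only input system.
    #define AXIS_THRESHOLD (32768 / 2)

    static bool axis_pressed(const SDL_Event *ev)
    {
        return ev->type == SDL_CONTROLLERAXISMOTION &&
               ev->caxis.value > AXIS_THRESHOLD;
    }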
2019-10-23 09:40:30 +02:00
dudemanguy
027ca4fb85 wayland: add various render-related options
The newest wayland changes have some new logic that make sense to expose
to users as configurable options.
2019-10-20 15:34:57 +00:00
wm4
51e141f7ba sws_utils: hack in zimg redirection support
Awful shit. I probably wouldn't accept this code from someone else, just
so you know.

The idea is that a sws_utils user can automatically use zimg without
large code changes. Basically, laziness. Since zimg support is still
very new, and I don't want that anything breaks just because zimg was
enabled at build time, an option needs to be set to enable it. (I have
especially obscure stuff in mind, which is all that
libswscale is used for in mpv.)

This _still_ doesn't cause zimg to be used anywhere, because the
sws_utils user has to opt-in by setting allow_zimg. This is because some
users depend on certain libswscale features.
2019-10-20 02:17:31 +02:00
wm4
07aa29ed8e video: add zimg wrapper
This provides a very similar API to sws_utils.h, which can be used to
convert and scale from one mp_image to another.

This commit adds only the code, but does not use it anywhere.

The code is quite preliminary and barely tested. It supports only a few
pixel formats, and will return failure for many others. (Unlike
libswscale, which tries to support anything that FFmpeg knows.)

zimg itself accepts only planar formats. Supporting other formats
requires manual packing/unpacking. (Compared to libswscale, the zimg API
is generally lower level, but allows for more flexibility.) Only BGR0
output was actually tested. It appears to work.
2019-10-20 02:17:31 +02:00
wm4
ad97a74940 manpage: fix a typo 2019-10-18 15:36:31 +02:00
wm4
273cc3055c video: do not disable display-sync on A/V desync
On an audio/video desync by more than 0.5 seconds, display-sync mode was
disabled, and not enabled again (until playback restart, e.g. a seek).

The idea was that this only happens when this playback mode is broken
and can't perform well anyway (A/V desync is a clear indication that
something is very wrong). Instead of behaving like a god damn POS, it
should revert to the more robust audio-sync mode.

Unfortunately, this could happen sporadically due to temporary system
performance problems, such as toggling fullscreen. Users didn't like
this, and asked for a function to disable it, or to recover in some
other way.

This mechanism is questionable anyway. If an ignorant user enables
display-sync, and encounters problems with it (without being able to
determine that display-sync is messing up), the player will still behave
like a POS on every playback, and even after every seek. It might
actually be helpful to fail more consistently. Also, I've found that
it's still relatively reliable anyway even without this mechanism.

So just remove the fallback.

Fixes: #7048
2019-10-17 19:23:35 +02:00
wm4
e49db40382 manpage: update --hwdec description
vdpaurb, vaapi-glx, and ANGLE's NV12-restriction are gone, making things
much simpler.
2019-10-17 11:10:40 +02:00
Jan Ekström
89f4ce9d6f vo_gpu/d3d11: switch adapter selection to case-insensitive startswith
This lets users set values such as "intel" or "nvidia" as the
adapter vendor is generally noted in the beginning of the
description string.
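
(The match is roughly equivalent to a case-insensitive prefix test like this
C sketch; the function name is invented.)

    #include <ctype.h>
    #include <stdbool.h>

    // Does the adapter description start with the user's value, ignoring
    // case? E.g. "nvidia" matches a description like "NVIDIA GeForce ...".
    static bool ci_startswith(const char *desc, const char *prefix)
    {
        while (*prefix) {
            if (tolower((unsigned char)*desc) != tolower((unsigned char)*prefix))
                return false;
            desc++;
            prefix++;
        }
        return true;
    }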
2019-10-15 22:12:48 +03:00
wm4
18bd768ecc manpage: attempt to remove some more cache option confusion
OK, so --cache-secs is useless, because the default is set to 10 hours.
And that part about the "maximum" was obviously a lie (I wonder if it
simply changed at some point).
2019-10-14 18:28:14 +02:00
Jan Ekström
648d785930 vo_gpu/d3d11: add support for configuring swap chain format
Query information on the system output most linked to the swap chain,
and either utilize a user-configured format, or 8bit
RGBA or 10bit RGB with 2bit alpha depending on the system output's
bit depth.
2019-10-13 22:31:33 +11:00
wm4
9e76c203f7 DOCS: some corrections around cache options 2019-10-08 18:38:23 +02:00
wm4
e5a97ef27f audio: do not try gapless if video is still ongoing
In this case, gapless will most likely not work. It will result in (very
slight) desync, or (more commonly with small buffer sizes) in an
underflow.

I think it would be legitimate to disable gapless at end of playback
completely if video is enabled at all. But this would need an exception
for cover art mode, so I guess the current solution is OK as well.
2019-10-06 20:46:22 +02:00
Niklas Haas
cb95ce75b5 options: rename --video-aspect to --video-aspect-override
The justification for this is the fact that the `video-aspect` property
doesn't work well with `cycle_values` commands that include the value
"-1".

The "video-aspect" property has effectively no change in behavior, but
we may want to make it read-only in the future. I think it's probably
fine to leave as-is, though.

Fixes #6068.
2019-10-04 21:34:22 +02:00
Oliver Freyermuth
5b45b2fcac DOCS: Add documentation for dvbin-prog and dvbin-channel-switch-offset. 2019-10-02 01:25:45 +02:00
Jan Ekström
1f76e69145 vo_gpu/d3d11: add adapter name validation and listing with "help"
Not the prettiest way to get it done, but seems to work.
2019-09-29 19:39:26 +03:00
Jan Ekström
8163906299 video/d3d11: add adapter selection by name into d3d11 options
This lets the user define an adapter name that can then be passed
further into the internals.
2019-09-29 19:39:26 +03:00
Anton Kindestam
6290420380 vo: make swapchain-depth option generic for all VOs
In preparation for making vo_drm able to use swapchain-depth
2019-09-28 14:10:01 +03:00
Wessel Dankers
643417dd17 video: add pure gamma TRC curves for 2.0, 2.4 and 2.6. 2019-09-27 13:21:41 +02:00
der richter
41f290f54e cocoa-cb: add support for 10bit opengl rendering
this will request a 16bit half-float framebuffer instead of an 8bit
integer framebuffer.

Fixes #3613
2019-09-26 00:02:02 +02:00
wm4
ff2aed2b56 sub: make font provider user-selectable
libass had an API to configure this since 2013. mpv always used
ASS_FONTPROVIDER_AUTODETECT, because usually there's little reason to
use anything else. The intention of the now added option is to allow
users to disable use of system fonts.

I didn't consider it worth the trouble to add the coretext and
directwrite enum items from ASS_DefaultFontProvider. The "auto" choice
will have the same effect if they're available. Also, the part of the
code which defines the option does not necessarily have libass available
(it's still optional!), so defining all enum items as choices is icky. I
still added fontconfig, since that may be nice to emulate a nostalgic
2010 feeling of mpv freezing on fontconfig.

The option for OSD is even less useful. (But you get it for free, and
why pass up a chance to add yet another useless option?)

This is not quite what was requested in #6947, but as close as it gets.
2019-09-25 22:11:48 +02:00
Nicolas F
2d1d815cc7 manpage: update requirements for vdpau hwdec use
We default to EGL instead of GLX now, which means vdpau only works
if we explicitly specify that we want a GLX context, as vdpau lacks
interop for EGL.

Update the hwdec documentation to reflect this.

Concerns #6980.
2019-09-22 16:27:24 +03:00
wnoun
1c43920fb8 demux_cue: auto-detect CUE sheet charset 2019-09-21 15:18:20 +02:00
wm4
2f5dbaa832 options: deprecate --stream-record
It's inadequate for most uses. There are better mechanisms.
2019-09-19 20:37:05 +02:00
wm4
023b5964b0 demux, command: add a third stream recording mechanism
That's right, and it's probably not the end of it. I'll just claim that
I have no idea how to create a proper user interface for this, so I'm
creating multiple partially-orthogonal, of which some may work better in
each of its special use cases.

Until now, there was --record-file. You get relatively good control
about what is muxed, and it can use the cache. But it sucks that it's
bound to playback. If you pause while it's set, muxing stops. If you
seek while it's set, the output will be sort-of trashed, and that's by
design.

Then --stream-record was added. This is a bit better (especially for
live streams), but you can't really control well when muxing stops or
ends. In particular, it can't use the cache (it just dumps whatever the
underlying demuxer returns).

Today, the idea is that the user should just be able to select a time
range to dump to a file, and it should not be affected by the user seeking
around in the cache. In addition, the stream may still be running, so
there's some need to continue dumping, even if it's redundant to
--stream-record.

One notable thing is that it uses the async command shit. Not sure
whether this is a good idea. Maybe not, but whatever. Also, a user can
always use the "async" prefix to pretend it doesn't.

Much of this was barely tested (especially the reinterleaving crap),
let's just hope it mostly works. I'm sure you can tolerate the one or
other crash?
2019-09-19 20:37:05 +02:00
wm4
5c0a626dee demux: allow backward cache to use unused forward cache
Until now, the following could happen: if you set a 1GB forward cache,
and a 1GB backward cache, and you opened a 2GB file, it would prune away
the data cached at the start as playback progressed past the 50% mark.

With this commit, nothing gets pruned, because the total memory usage
will still be 2GB, which equals the total allowed memory usage of 1GB +
1GB.

There are no explicit buffers (every packet is malloc'ed and put into a
linked list), so it all comes down to buffer size computations. Both
reader and prune code use these sizes to decide whether a new packet
should be read / an old packet discarded. So just add the remaining free
"space" from the forward buffer to the available backward buffer. Still
respect if the back buffer is set to 0 (e.g. unseekable cache where it
doesn't make sense to keep old packets).

We need to make sure that the forward buffer can always append, as long
as the forward buffer doesn't exceed the set size, even if the back
buffer "borrows" free space from it. For this reason, always keep 1 byte
free, which is enough to allow it to read a new packet. Also, it's now
necessary to call pruning when adding a packet, to get back "borrowed"
space that may need to be free'd up after a packet has been added.
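
(A sketch of the size accounting in C under the assumptions described above;
the field and function names are invented.)

    #include <stdbool.h>
    #include <stdint.h>

    struct cache_limits {
        int64_t fw_max, back_max;   // configured forward/backward limits
        int64_t fw_used, back_used; // current usage
    };

    // May the back buffer keep another old packet of this size?
    static bool back_can_keep(const struct cache_limits *c, int64_t pkt_size)
    {
        if (c->back_max == 0)   // unseekable cache: keep no old packets
            return false;
        // Unused forward space is lent to the back buffer, minus 1 byte so
        // the forward buffer can always append at least one more packet.
        int64_t spare_fw = c->fw_max - c->fw_used - 1;
        if (spare_fw < 0)
            spare_fw = 0;
        return c->back_used + pkt_size <= c->back_max + spare_fw;
    }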

I refrained from doing the same for forward caching (making forward
cache use unused backward cache). This would work, but has a
disadvantage. Assume playback starts paused. Demuxing will stop once the
total allowed cache size is reached. When unpausing, the
forward buffer will slowly move to the back buffer. That alone will not
change the total buffer size, so demuxing remains stopped. Playback
would need to pass over data of the size of the back buffer until
demuxing resumes; consider this unacceptable. Live playback would break
(or rather, would not resume in unintuitive ways), even normal streaming
may break if the server invalidates the URL due to inactivity. As an
alternative implementation, you could prune the back buffer immediately,
so the forward buffer can grow, but then the back buffer would never
grow. Also makes no sense.

As far as the user interface is concerned, the idea is that the limits
on their own aren't really meaningful, the purpose is merely to vaguely
restrict the cache memory usage. There could be just a single option to
set the total allowed memory usage, but the separate backward cache
controls the default ratio of backward/forward cache sizes. From that
perspective, it doesn't matter if the backward cache uses more of the
total buffer than assigned, if the forward buffer is complete.
2019-09-19 20:37:05 +02:00
wm4
b945952e0d demux: runtime option changing for cache and stream recording
Make most of the demuxer options runtime-changeable. This includes the
cache options and stream recording. The manpage documents some of the
possibly weird issues related to this.

In particular, the disk cache isn't shuffled around if the setting
changes at runtime.
2019-09-19 20:37:05 +02:00
wm4
83d7123dc3 vo_gpu: remove mali-fbdev
Useless at this point, I don't even know if it still works, or how to
test it.
2019-09-19 20:37:05 +02:00
wm4
0b4790f23f aspect: add video margin options
Semantics a bit questionable. This is done for the OSC (next commit),
and a comment added to the manpage explicitly states this. Meaning this is
probably garbage and needs to be revisited when the OSC changes and/or
someone wants to use this margin feature for something else.

Not sure about the subtitle thing. It's imaginable that someone uses
these options to create empty borders for subtitles on the bottom, so
subtitles should be located there. On the other hand, this gives a
rather unpolished user experience when using the (later added) OSC
feature to not overlap with the video. There's not much of a point if
the OSC still overlaps the video. However, I'm too lazy to think about
this, so it stays like it is.
2019-09-19 20:37:05 +02:00
wm4
17da9071a4 demux: add a on-disk cache
Somewhat similar to the old --cache-file, except for the demuxer cache.
Instead of keeping packet data in memory, it's written to disk and read
back when needed.

The idea is to reduce main memory usage, while allowing fast seeking in
large cached network streams (especially live streams). Keeping the
packet metadata on disk would be rather hard (would use mmap or so, or
rewrite the entire demux.c packet queue handling), and since it's
relatively small, just keep it in memory.

Also for simplicity, the disk cache is append-only. If you're watching
really long livestreams, and need pruning, you're probably out of luck.
This still could be improved by trying to free unused blocks with
fallocate(), but since we're writing multiple streams in an interleaved
manner, this is slightly hard.

Some rather gross ugliness in packet.h: we want to store the file
position of the cached data somewhere, but on 32 bit architectures, we
don't have any usable 64 bit members for this, just the buf/len fields,
which add up to 64 bit - so the shitty union aliases this memory.
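
(Illustrative only; the real demux_packet layout differs, but the trick is
essentially this.)

    #include <stdint.h>

    struct packet_storage {
        union {
            struct {
                uint8_t *buf;  // packet data, if held in memory
                uint32_t len;  // on 32 bit, pointer + len add up to 64 bit
            } mem;
            uint64_t disk_pos; // byte offset in the cache file, if on disk
        } u;
        unsigned char is_cached; // selects which union member is valid
    };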

Error paths untested. Side data (the complicated part of trying to
serialize ffmpeg packets) untested.

Stream recording had to be adjusted. Some minor details change due to
this, but probably nothing important.

The change in attempt_range_joining() is because packets in cache
have no valid len field. It was a useful check (heuristically
finding broken cases), but not a necessary one.

Various other approaches were tried. It would be interesting to list
them and to mention the pros and cons, but I don't feel like it.
2019-09-19 20:37:05 +02:00
wm4
27fcd4ddc6 demux_lavf: compensate timestamp resets for OGG web radio streams
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break it.

This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.

At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.

This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.

This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
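
(A sketch of such a linearization step in C; the 1 second reset threshold and
all names are made up, and the real code has to deal with per-stream state.)

    #include <math.h>

    struct linearize {
        double offset;   // added to all output timestamps
        double prev_pts; // last linearized pts (NAN before the first packet)
    };

    static double linearize_pts(struct linearize *st, double raw_pts)
    {
        double pts = raw_pts + st->offset;
        if (!isnan(st->prev_pts) && pts + 1.0 < st->prev_pts) {
            // Timestamp reset detected: shift the new timeline so it
            // continues where the previous one ended.
            st->offset += st->prev_pts - pts;
            pts = st->prev_pts;
        }
        st->prev_pts = pts;
        return pts;
    }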
2019-09-19 20:37:05 +02:00
wm4
7ed4d77a97 manpage: some more backward playback edits 2019-09-19 20:37:05 +02:00
wm4
f439064e7f demux: demux multiple audio frames in backward playback
Until now, this usually passed a single audio frame to the decoder, and
then did a backstep operation (cache seek + frame search) again. This is
probably not very efficient, especially considering it has to search the
packet queue from the "start" every time again.

Also, with most audio codecs, an additional "preroll" frame was passed
first. In these cases, the preroll frame would make up 50% of audio
decoding time. Also not very efficient.

Attempt to fix this by returning multiple frames at once. This reduces
the number of backstep operations and the ratio of preroll frames. In
theory, this should help efficiency. I didn't test it though, why would
I do this? It's just a pain. Set it to unscientific 10 frames.
(Actually, these are 10 keyframes, so it's much more for codecs like
TrueHD. But I don't care about TrueHD.)

This commit changes some other implementation details. Since we can
return more than 1 non-preroll keyframe to the decoder, some new state
is needed to remember how much. The resume packet search is adjusted to
find N ("total") keyframe packets in general, not just preroll frames.
I'm removing the special case for 1 preroll packet; audio used this, but
doesn't anymore, and it's premature optimization anyway.

Expose the new mechanism with 2 new options. They're almost completely
pointless, since nobody will try them, and if they do, they won't
understand what these options truly do. And if they actually do, they
most likely would be capable of editing the source code, and we could
just hardcode the parameters. Just so you know that I know that the
added options are pointless.

The following two things are truly unrelated to this commit, and more
like general refactoring, but fortunately nobody can stop me.

Don't set back_seek_pos in dequeue_packet() anymore. This was sort of
pointless, since it was set in find_backward_restart_pos() anyway (using
some of the same packets). The latter function tries to restrict this to
the first keyframe range though, which is an optimization that in theory
might break with broken files (duh), but in these cases a lot of other
things would be broken anyway.

Don't set back_restart_* in dequeue_packet(). I think this is an
artifact of the old restart code (cf. ad9e473c55). It can be done
directly in find_backward_restart_pos() now. Although this adds another
shitty packet search loop, I prefer this, because clearer what's
actually happening.
2019-09-19 20:37:05 +02:00
wm4
a88b7bf0fc manpage: another comment on backward playback with hardware decoding 2019-09-19 20:37:05 +02:00
wm4
165799157d vd_lavc: add --hwdec-extra-frames option
Surprised this didn't exist before.
2019-09-19 20:37:05 +02:00
wm4
e8a051b3cb f_decoder_wrapper: reorganize, fix EDL/ordered chapters backward playback
Before this commit, there was a single process_decoded_frame() function.
It handled various aspects of dealing with a newly decoded frame. Move
some of these to a separate process_output_frame() function.

This new function is called in the order the frames are returned to the
playback core. Some correct_audio_pts() (was process_audio_frame())
becomes slightly less awkward due to this, and the timestamp smoothing
can actually work in backward playback mode now (thus moving p->pts out
of reset_decoder()).

Behavior for normal playback also changes subtly. This shouldn't matter
in sane cases, but if you mix broken files, --no-correct-pts, and
timeline stuff, differences in behavior might be visible.

Timeline clipping (EDL/ordered chapters) works now, because it's done
before "transforming" the timestamps. Audio timestamp smoothing happens
after it, which is a behavior change, but should be more correct. This
still runs crazy_video_pts_stuff() before everything else. On the other
hand, --no-correct-pts or missing timestamp processing is done last. But
these things didn't really work with timeline before.
2019-09-19 20:37:05 +02:00
wm4
0c5df2965e options: rename --play-direction to --play-dir
And add simpler aliases for the modes.

I'm not sure how to name things, and the option list is in general full
of different conventions. Some names are shortened, some are explicit
and long.

I guess options that have a chance to be used normally (i.e. not obscure
tuning or debugging) should have a short and convenient names.

In this specific case, play-direction is like a mixture of both. It
should be either playback-direction or play-dir, not shorten one word
but not the other.

The convenience aliases are because I got sick of typing out "backward".
I guess "back" would also do it, but there's no proper antonym (and
maybe it's "wrong" in the strict sense of the word).
2019-09-19 20:37:05 +02:00
wm4
8812530b31 demux: more backwards playback preroll packets for vorbis and mp3
Together with the previous commit, this seems to make backward playback
work in files with vorbis and mp3 audio codecs.

For Vorbis (with libavcodec's decoder, didn't test libvorbis), the first
packet was just always completely discarded. This happened even though
we tell libavcodec that we do discarding of padding manually. It simply
happened inside the codec, not libavcodec's general initial padding
handling. In addition, the first output decoded frame seems to contain
partial data. (Unlike the opus decoder, it doesn't report any padding at
all.)

The Opus decoder (again libavcodec only tested) reports an initial
padding, but it appears to be too small, and it sounds right only with 2
packets discarded. So its status doesn't change.

I'm not sure why I need 2 frames for mp3, but with that value I had
success on the samples I tested.
2019-09-19 20:37:05 +02:00
wm4
7d3bdb91da manpage: document accidental feature/bug
Clarify existing semantics for the --start/--end/--length options.
De-emphasize the difference between absolute and relative timestamps,
since they've not been different by default since mpv 0.14.

Document a bug, that also happens to work as a feature: if the option
value begins with spaces, the code for checking for relative timestamps
is inactive, and they're always considered absolute. The check is done
on the first character of the string - so even a negative timestamp will
be treated as absolute.)
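
(The check amounts to something like this C sketch; with a leading space the
first character is neither '+' nor '-', so the value stays absolute.)

    #include <stdbool.h>

    // Only the very first character decides: " -10" is treated as absolute,
    // while "-10" would be treated as relative.
    static bool looks_relative(const char *s)
    {
        return s[0] == '+' || s[0] == '-';
    }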

Yes, this is useful in extremely rare situations, such as when you
really want send a specific timestamp (even a negative one) to the
demuxer.
2019-09-19 20:37:05 +02:00
wm4
7a0f112a44 player: modify/simplify AB-loop behavior
This changes the behavior of the --ab-loop-a/b options. In addition, it
makes it work with backward playback mode.

The most obvious change is that both the A and B points need to be
set now before any looping happens. Unlike before, unset points don't
implicitly use the start or end of the file. I think the old behavior
was a feature that was explicitly added/wanted. Well, it's gone now.

This is because of 2 reasons:

1. I never liked this feature, and it always got in my way (as user).
2. It's inherently annoying with backward playback mode.

In backward playback mode, the user wants to set A/B in the wrong order.
The ab-loop command will first set A, then B, so if you use this command
during backward playback, A will be set to a higher timestamps than B.
If you switch back to forward playback mode, the loop would stop
working. I want the loop to just continue to work, and the chosen
solution conflicts with the removed feature.

The order issue above _could_ be fixed by also switching the AB-loop
user option values around on direction switch. But there are no other
instances of option changes magically affecting other options, and doing
this would probably lead to unexpected misery (dying from corner cases
and such).

Another solution is sorting the A/B points by timestamps after copying
them from the user options. Then A/B options set in backward mode will
work in forward mode. This is the chosen solution. If you sort the
points, you don't know anymore whether the unset point is supposed to
signify the end or the start of the file.
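
(Roughly, with invented names, and assuming NaN stands for an unset point:)

    #include <math.h>

    // Copy the user's A/B options and sort them, so points set during
    // backward playback (where A > B) still form a valid forward range.
    static void get_ab_range(double opt_a, double opt_b, double *lo, double *hi)
    {
        *lo = opt_a;
        *hi = opt_b;
        if (!isnan(*lo) && !isnan(*hi) && *lo > *hi) {
            double tmp = *lo;
            *lo = *hi;
            *hi = tmp;
        }
    }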

The AB-loop code is slightly better abstracted now, so it should be easy
to restore the removed feature. It would still require coming up with a
solution for backwards playback, though.

A minor change is that if one point is set and the other is unset, I'm
rendering both the chapter markers and the marker for the set point.
Why? I don't know. My test file had chapters, and I guess I decided this
looked better.

This commit also fixes some subtle and obvious issues that I already
forgot about when I wrote this commit message. It cleans up some minor
code duplication and nonsense too.

Regarding backward playback, the code uses an unsanitary mix of internal
("transformed") and user timestamps. So the play_dir variable appears
more than usual.

To mention one unfixed issue: if you set an AB-loop that is completely
past the end of the file, it will get stuck in an infinite seeking loop
once playback reaches the end of the file. Fixing this reliably seemed
annoying, so the fix is "just don't do this". It's not a hard freeze
anyway.
2019-09-19 20:37:05 +02:00
wm4
900a9624f9 options: remove --chapter
Has been deprecated for almost 3 years. Manpage didn't mention the
deprecation, but CLI and release notes did. It wouldn't be much effort
to keep this option working, but I just don't see the damn point.

--start/--end can specify chapters using special syntax, which is
equivalent.
2019-09-19 20:37:05 +02:00
wm4
204a7725de demux_lavf: implement bad hack for backward playback of wav
This commit generally fixes backward playing in wav, at least in most
PCM cases.

libavformat's wav demuxer (and actually all other raw PCM based
demuxers) have a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal: it will log the message
"Demuxer not cooperating.", and then typically stop doing anything.

Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.

libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.

The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.

Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.

Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected as heuristic). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
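
(A sketch of the rounding in C, with guessed parameter names; not the actual
implementation.)

    #include <stdint.h>

    // Round the seek target down to the same packet grid the demuxer uses,
    // given a guessed fixed number of samples per returned packet.
    static double align_seek_pts(double seek_pts, double start_pts,
                                 int64_t pkt_samples, int samplerate)
    {
        if (pkt_samples <= 0 || samplerate <= 0)
            return seek_pts;            // nothing known; leave it alone
        double pkt_dur = (double)pkt_samples / samplerate;
        int64_t n = (int64_t)((seek_pts - start_pts) / pkt_dur);
        if (n < 0)
            n = 0;
        return start_pts + n * pkt_dur;
    }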

The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.

We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.

This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.

An earlier commit fixed the same issue for mpv's demux_raw.

An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.

Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
2019-09-19 20:37:04 +02:00
wm4
a84c4de31f manpage: deinterlacing with backwards playback probably works
As well as other filtering. I was writing this with the assumption that
timestamps go backwards (which I first planned to do). But in fact,
timestamps go forward, frame durations are positive, and adding a frame
duration to a timestamp yields the correct result. The only strange
thing is that timestamps are negative.

Also, media of course goes backwards. In other possible implementation,
filters would see normal forward playback, interrupted by seeks or
discontinuities. It turns out the current implementation of providing a
continuous backward media stream is probably better for filters.

Even deinterlacing seems to work. libavcodec always outputs fields as
interleaved frames (i.e. fields are not reversed), and making up
timestamps for the new frames (when doubling the framerate) works
exactly like in the forward case.

Actually the previous paragraph was a lie: in rare cases, libavcodec does
not output fields as interleaved frames. Sometimes an AVFrame contains a
single field. In this case you'd need to invert the field dominance for
deinterlacing filters to work correctly.
2019-09-19 20:37:04 +02:00
wm4
b04a761ce4 manpage: backward encoding actually appears to work
The way backward playback is implemented doesn't break basic assumptions
about timestamps after the decoder, so I guess all the encoding mode
needs to do is to adjust for the start offset, which it already does.

Though I might be wrong and my test was possibly flawed.

Stream recording on the other hand will fail immediately with
--record-file, and --stream-record will probably yield unexpected
results if any backstep seeks are done.
2019-09-19 20:37:04 +02:00
wm4
f53f9b89b1 demux: add a special case for backward demuxing of opus
Make --audio-backward-overlap default to 2 for Opus. I have no idea why
this is needed. It seems to fix backward decoding though (going purely
by listening).

Normally, this should not be needed, since initial padding is completely
contained within the first packet (normally, and in the case I tested).
So the 2nd packet/frame should be fine, but for some unknown reason it
works only with the 3rd.
2019-09-19 20:37:04 +02:00
wm4
6d11668a9c demux: use no overlapping packets for lossless audio
Worthless optimization, but at least it justifies that the
--audio-backward-overlap option has an "auto" choice. Tested with PCM
and FLAC.
2019-09-19 20:37:04 +02:00
wm4
327f3fc848 manpage: document why Vorbis backward playback does not work
The only reasonable solution to this is probably to base discarding of
preroll frames on timestamps, instead of frame/packet count. But
then you get issues with video and its dumb timestamp reordering. So for
now, fuck it.
2019-09-19 20:37:04 +02:00
wm4
085c7106b9 demux: change backward-overlap to keyframe ranges instead of packets
This seems more useful in general. This change also happens to fix a
miscounting of preroll packets when some of them were "rounded" away,
which could make it get stuck.

Also a simple intra-refresh encode with x264 (and muxed to mkv by it)
seems to work now. I guess I misinterpreted earlier results.
2019-09-19 20:37:04 +02:00
wm4
a3ac2019ed demux: fix initial backward demuxing state in some cases
Just "mpv file.mkv --play-direction=backward" did not work, because
backward demuxing from the very end was not implemented. This is another
corner case, because the resume mechanism so far requires a packet
"position" (dts or pos) as reference. Now "EOF" is another possible
reference.

Also, the backstep mechanism could cause streams to find different
playback start positions, basically leading to random playback start
(instead of what you specified with --start). This happens only if
backstep seeks are involved (i.e. no cached data yet), but since this is
usually the case at playback start, it always happened. It was racy too,
because it depended on the order the decoders on other threads requested
new data. The comment below "resume_earlier" has some more blabla.

Some other details are changed.

I'm giving up on the "from_cache" parameter, and don't try to detect the
situation when the demuxer does not seek properly. Instead, always seek
back, hopefully some more.

Instead of trying to adjust the backstep seek target by an arbitrary
value of 1.0 seconds, always rely on the (equally arbitrary) value
provided by the user via --demuxer-backward-playback-step. If the demuxer should really
get "stuck" and somehow miss the seek target badly, or the user sets the
option value to 0, then the demuxer will not make any progress and just
eat CPU. (Although due to backward seek semantics used for backstep
seeks, even a very small seek step size will work. Just not 0.)

It seems this also fixes backstepping correctly when the initial seek
ended at the last keyframe range. (The explanation above was about the
case when it ends at EOF. These two cases are different. In the former,
you just need to step to the previous keyframe range, which was broken
because it didn't always react correctly to reaching EOF. In the latter,
you need to do a separate search for the last keyframe.)
2019-09-19 20:37:04 +02:00
wm4
27c5550de2 sd_lavc: implement --sub-pos for bitmap subtitles
Simple enough to do. May have mixed results. Typically, bitmap subtitles
will have a tight bounding box around the rendered text. But if for
example there is text on the top and bottom, it may be a single big
bitmap with a large transparent area between top and bottom. In
particular, DVD subtitles are really just a single screen-sized
RLE-encoded bitmap, though libavcodec will crop off transparent areas.

Like with sd_ass, you can't move subtitles _down_ if they are already in
their origin position. This could probably be improved, but I don't want
to deal with that right now.
2019-09-19 20:37:04 +02:00
wm4
6f7260d29c manpage: document that backward playback from the end does not work
Not specifying a --start or using --start=100% with
--play-direction=backward usually does not work. The demuxer gets no
packets and immediately enters EOF state, which then hangs because
backward playback mode neither considers this mode, nor propagates the
EOF.

As far as demuxer implementations are concerned, this behavior is OK and
even wanted. Seeking near the end with SEEK_FORWARD set is allowed not
to return any packets (so a normal relative forward seek as done by the
user would end playback). Seeking exactly to the end or past it without
SEEK_FORWARD set is probably also sane.

Another vaguely related issue is that a backward seek during playback
start does not "establish" the demux position correctly: if stream A
hits the next keyframe and seeks back, while stream B has not had a
chance to read a packet yet, then stream B will never try to read from
the old position. The effect is that stream B (and thus playback) will
effectively miss the seek target. This is "random" because it depends on
the order and number of packet read calls made by the decoders.

Fixing this is probably hard, and requires extending the already complex
state machine with more states, so turn the manpage into a TODO list for
now.
2019-09-19 20:37:04 +02:00
wm4
5b4ae42328 demux_raw: fix operation with demuxer cache and backward playback
Raw audio formats can be accessed sample-wise, and logically audio
packets demuxed from it would contain only 1 sample. This is
inefficient, so raw audio demuxers typically "bundle" multiple samples
in one packet.

The problem for the demuxer cache and backward playback is that they
need properly aligned packets to make seeking "deterministic". The
requirement is that if you read some packets, and then seek back, you
eventually see the same packets again. demux_raw basically allowed to
seek into the middle of a previously returned packet, which makes it
impossible to make the transition seamless. (Unless you'd be aware of
the packet data format and cut them to make it seamless, which is too
complex for such a use case.)

Solve this by always aligning seeks to packet boundaries. This reduces
the seek accuracy to the arbitrarily chosen packet size. But you can use
hr-seek to fix this. The gain from not making raw audio an awful special
case is worth this "stupid" suggestion to use hr-seek.

It appears this also fixes the fact that it could and did seek into the
middle of a frame (not sure if this code was ever tested - it goes back to
removing the code duplication between the former demux_rawaudio.c and
demux_rawvideo.c).

If you really cared, you could introduce a seek flag that controls
whether the seek is aligned or not. Then code which requires
"deterministic" demuxing could set it. But this isn't really useful for
us, and we'd always set the flag anyway, unless maybe the caching were
forced disabled.

libavformat's wav demuxer exhibits the same issue. We can't fix it (it
would require the unpleasant experience of contributing to FFmpeg), so
document this in options.rst. In theory, this also affects seek range
joining, but the only bad effect should be that cached data is
discarded.
2019-09-19 20:37:04 +02:00
wm4
b9d351f02a Implement backwards playback
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)

(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)

How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.

The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bug/mess
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).

Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).

The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.

Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single of them to "reverse" them.

E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or if you, as it's implemented now, just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then in the normal timing/renderer code:

    bool before = pts_a < pts_b;

Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.

Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.

As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)

VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.

FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
2019-09-19 20:37:04 +02:00
wm4
556e204a11 player: add --demuxer-cache-wait option 2019-09-19 20:37:04 +02:00
wm4
d2ef2f98a8 loadfile, ytdl_hook: don't reject EDL-resolved URLs through playlist
The ytdl wrapper can resolve web links to playlists. This playlist is
passed as big memory:// blob, and will contain further quite normal web
links. When playback of one of these playlist entries starts, ytdl is
called again and will resolve the web link to a media URL again.

This didn't work if playlist entries resolved to EDL URLs. Playback was
rejected with a "potentially unsafe URL from playlist" error. This was
completely weird and unexpected: using the playlist entry directly on
the command line worked fine, and there isn't a reason why it should be
different for a playlist entry (both are resolved by the ytdl wrapper
anyway). Also, if the only EDL URL was added via audio-add or sub-add,
the URL was accessed successfully.

The reason this happened is because the playlist entries were marked as
STREAM_SAFE_ONLY, and edl:// is not marked as "safe". Playlist entries
passed via command line directly are not marked, so resolving them to
EDL worked.

Fix this by making the ytdl hook set load-unsafe-playlists while the
playlist is parsed. (After the playlist is parsed, and before the first
playlist entry is played, file-local options are reset again.) Further,
extend the load-unsafe-playlists option so that the playlist entries are
not marked while the playlist is loaded.

Since playlist entries are already verified, this should change nothing
about the actual security situation.

There are now 2 locations which check load_unsafe_playlists. The old one
is a bit redundant now. In theory, the playlist loading code might not
be the only code which sets these flags, so keeping the old code is
somewhat justified (and in any case it doesn't hurt to keep it).

In general, the security concept sucks (and always did). I can for
example not answer the question whether you can "break" this mechanism
with various combinations of archives, EDL files, playlists files,
compromised sites, and so on. You probably can, and I'm fully aware that
it's probably possible, so don't blame me.
2019-09-19 20:37:04 +02:00
wm4
0abe34ed21 vo_gpu: x11: remove special vdpau probing, use EGL by default
Originally, vo_gpu/vo_opengl considered the case of Nvidia proprietary
drivers, which required vdpau/GLX, and Intel open source drivers, which
require vaapi/EGL. Since window creation and GPU context creation are
inseparable in mpv's internal API, it had to pick the correct API very
early, or hardware decoding wouldn't work. "x11probe" was introduced for
this reason. It created a GLX context (without showing the window yet),
and checked whether vdpau was available. If yes, it used GLX, if not, it
continued probing x11/EGL. (Obviously it couldn't always fail on GLX
without vdpau, which is why it was a separate "probe" backend.)

Years passed, and now the situation is different. Vdpau is dead. Nvidia
drivers and libavcodec now provide CUDA interop, which requires EGL, and
fixes some of the vdpau problems. AMD drivers now provide vaapi, which
generally works better than vdpau. Intel didn't change.

In particular, vaapi provides working HEVC Main10 support. In theory, it
should work on vdpau too, with quality reduction (no 10 bit surfaces),
but I couldn't get it to work.

So always prefer EGL. And suddenly hardware decoding works. This is
actually rather important, because HEVC is unfortunately on the rise,
despite shitty encoders and unoptimized decoders. The latter may mean
that hardware decoding works better than libavcodec.

This should have been done a long, long time ago.
2019-09-15 20:00:52 +03:00
sfan5
ee0f4444f9 image_writer: add webp-compression option 2019-09-14 23:02:39 +02:00
sfan5
0f79444c6d image_writer: add WebP support (lossy or lossless) 2019-09-14 23:02:39 +02:00
Niklas Haas
7cf288ec77 DOCS: remove references to --video-stereo-mode
This option was removed by a5610b2a but the documentation persisted.
Also adds an OPT_REMOVED.

Closes #6938.
2019-09-14 21:16:38 +02:00
wm4
b30e85508a Remove classic Linux analog TV support, and DVB runtime controls
Linux analog TV support (via tv://) was excessively complex, and
whenever I attempted to use it (cameras or loopback devices), it didn't
work well, or would have required some major work to update it. It's
very much stuck in the analog past (my favorite are the frequency tables
in frequencies.c for analog TV channels which don't exist anymore).

Especially cameras and such work fine with libavdevice and better than
tv://, for example:

  mpv av://v4l2:/dev/video0

(adding --profile=low-latency --untimed even makes it mostly realtime)

Adding a new input layer that targets such "modern" uses would be
acceptable, if anyone is interested in it. The old TV code is just too
focused on actual analog TV.

DVB is rather obscure, but has an active maintainer, so don't remove it.
However, the demux/stream ctrl layer must go, so remove controls for
channel switching. Most of these could be reimplemented by using the
normal method for option runtime changes.
2019-09-13 17:32:19 +02:00
Bin Jin
ca2f193671 vo_gpu: implement error diffusion for dithering
This is a straightforward parallel implementation of error diffusion
algorithms in compute shader. Basically we use single work group with
maximal possible size to process the whole image. After a shift
mapping we are able to process all pixels column by column.

A large ring buffer is allocated in shared memory to speed things up.
However the size of the required shared memory depends linearly on the
height of the video window (or screen height in fullscreen mode). In case
there is not enough shared memory, it will fall back to `--dither=fruit`.

The maximal allowed work group size is hardcoded as 1024. Ideally we
could query `GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS`. But for whatever
reason, it seems most high end cards from nvidia and amd support only
the minimal required value, so I guess we can stick to it for now.
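
Assuming the new mode is selected through the existing --dither option
(as the fruit fallback above suggests), usage would look roughly like
(video.mkv being any file):

  mpv --vo=gpu --dither=error-diffusion video.mkv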
2019-06-16 11:19:44 +02:00
Bin Jin
ae1c489b31 vo_gpu: allow user shader to fix texture offset
This commit essentially makes user shaders able to fix the texture offset
(produced by another prescaler, for example) like the builtin `--scale`.
2019-06-06 20:01:56 +02:00
der richter
90e44d3ff2 cocoa-cb: add support for custom colored title bar 2019-04-02 02:09:01 +03:00
der richter
837e5058ff cocoa-cb: refactor title bar styling
half of the materials we used were deprecated with macOS 10.14, broken
and not supported by run time changes of the macOS theme. furthermore
our styling names were completely inconsistent with the actual look
since macOS 10.14, eg ultradark got a lot brighter and couldn't be
considered ultradark anymore.

i decided to drop the old option --macos-title-bar-style and rework
the whole mechanism to allow more freedom. now materials and appearance
can be set separately. even if apple changes the look or semantics in
the future the new options can be easily adapted.
2019-04-02 02:09:01 +03:00
Jan Ekström
199aabddcc Merge branch 'master' into pr6360
Manual changes done:
  * Merged the interface-changes under the already master'd changes.
  * Moved the hwdec-related option changes to video/decode/vd_lavc.c.
2019-03-11 01:00:27 +02:00
zc62
e37c253b92 lcms: allow infinite contrast
Fixes #5980
2019-03-09 12:55:44 +01:00
Martin Herkt
8f5a42b1a0
options: do not enable WMV3 hwdec by default
Crashes NVIDIA, probably buggy on others. No one ever tests this shit.

See #2192
2019-03-01 12:44:45 +01:00
Niklas Haas
3f1bc25d4d vo_gpu: use dB units for scene change detection
Rather than the linear cd/m^2 units, these (relative) logarithmic units
lend themselves much better to actually detecting scene changes,
especially since the scene averaging was changed to also work
logarithmically.
2019-02-18 01:54:06 +02:00
Niklas Haas
12e58ff8a6 vo_gpu: allow boosting dark scenes when tone mapping
In theory our "eye adaptation" algorithm works in both ways, both
darkening bright scenes and brightening dark scenes. But I've always
just prevented the latter with a hard clamp, since I wanted to avoid
blowing up dark scenes into looking funny (and full of noise).

But allowing a tiny bit of over-exposure might be a good thing. I won't
change the default just yet (better let users test), but a moderate
value of 1.2 might be better than the current 1.0 limit. Needs testing
especially on dark scenes.
2019-02-18 01:54:06 +02:00
Niklas Haas
6179dcbb79 vo_gpu: redesign peak detection algorithm
The previous approach of using an FIR with tunable hard threshold for
scene changes had several problems:

- the FIR involved annoying hard-coded buffer sizes, high VRAM usage,
  and the FIR sum was prone to numerical overflow which limited the
  number of frames we could average over. We also totally redesign the
  scene change detection.

- the hard scene change detection was prone to both false positives and
  false negatives, each with their own (annoying) issues.

Scrap this entirely and switch to a dual approach of using a simple
single-pole IIR low pass filter to smooth out noise, while using a
softer scene change curve (with tunable low and high thresholds), based
on `smoothstep`. The IIR filter is extremely simple in its
implementation and has an arbitrarily user-tunable cutoff frequency,
while the smoothstep-based scene change curve provides a good, tunable
tradeoff between adaptation speed and stability - without exhibiting
either of the traditional issues associated with the hard cutoff.

Another way to think about the new options is that the "low threshold"
provides a margin of error within which we don't care about small
fluctuations in the scene (which will therefore be smoothed out by the
IIR filter).
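
A rough sketch of the combination described above (illustrative C with
assumed names and parameters, not the actual shader code):

    #include <math.h>

    static float smoothstep(float lo, float hi, float x)
    {
        x = (x - lo) / (hi - lo);
        x = x < 0 ? 0 : x > 1 ? 1 : x;
        return x * x * (3 - 2 * x);
    }

    // state: smoothed peak estimate, frame_peak: this frame's measured peak,
    // alpha: IIR coefficient derived from the cutoff frequency,
    // lo/hi: scene change thresholds. Small deviations get smoothed by the
    // IIR filter; large ones blend towards an instant update.
    static float update_peak(float state, float frame_peak, float alpha,
                             float lo, float hi)
    {
        float w = smoothstep(lo, hi, fabsf(frame_peak - state));
        float k = alpha + (1.0f - alpha) * w;
        return state + k * (frame_peak - state);
    }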
2019-02-18 01:54:06 +02:00
Niklas Haas
3fe882d4ae vo_gpu: improve tone mapping desaturation
Instead of desaturating towards luma, we desaturate towards the
per-channel tone mapped version. This essentially provides a smooth
roll-off towards the "hollywood"-style (non-chromatic) tone mapping
algorithm, which works better for bright content, while continuing to
use the "linear" style (chromatic) tone mapping algorithm for primarily
in-gamut content.

We also split up the desaturation algorithm into strength and exponent,
which allows users to use less aggressive desaturation settings without
affecting the overall curve.
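
Roughly, the idea can be illustrated like this (assumed names, heavily
simplified; tone_map() is only a stand-in for whatever curve is in use):

    #include <math.h>

    static float tone_map(float x)
    {
        return x / (x + 1.0f);  // placeholder curve (Reinhard-like)
    }

    // Blend the chromatic ("linear") result towards the per-channel
    // ("hollywood") result, controlled by strength and exponent.
    // Assumes luma > 0.
    static void desaturate(float rgb[3], float luma, float strength,
                           float expo)
    {
        for (int n = 0; n < 3; n++) {
            float chromatic = rgb[n] / luma * tone_map(luma); // keeps saturation
            float per_chan  = tone_map(rgb[n]);               // desaturates peaks
            float over      = fmaxf(rgb[n] - 1.0f, 0.0f);     // out-of-range amount
            float w         = fminf(powf(strength * over, expo), 1.0f);
            rgb[n] = chromatic * (1.0f - w) + per_chan * w;
        }
    }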
2019-02-18 01:54:06 +02:00
Martin Herkt
3dd59dbed0
options: do not enable MPEG2 hwdec by default
Too many broken hardware decoders. Noticed wrong decoding of a video
file encoded with x262 on RX Vega when using VAAPI (Mesa 18.3.2).
Looks fine with swdec and a cheap hardware BD player.

Reverts 017f3d0674
2019-02-13 02:43:57 +01:00
Kotori Itsuka
94d35627f5 DOCS/options.rst: update target-peak description
List auto as an option for target-peak, and state that auto is its
default operation.
2019-01-23 09:31:35 +01:00
Benjamin Barenblat
c681fc133c DOCS/man: update man pages to describe ReplayGain fallback
Describe ReplayGain album-to-track fallback behavior introduced in
commits e392d6610d and
be90f2c8dd.
2019-01-16 16:58:33 +01:00
Oliver Freyermuth
d6d6da4711 stream_dvb: Correct range for dvbin-card option.
Adapt documentation accordingly and
also fix an off-by-one check in the code.
closes #6371
2018-12-12 01:50:43 +02:00
wm4
9d8afcf79e demux: add another stream recording feature
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but anything you're receiving.
2018-12-06 10:31:10 +01:00
Anton Kindestam
8b83c89966 Merge commit '559a400ac36e75a8d73ba263fd7fa6736df1c2da' into wm4-commits--merge-edition
This bumps libmpv version to 1.103
2018-12-05 19:19:24 +01:00
Niklas Haas
5bcac8580d spirv: remove --spirv-compiler=nvidia
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forwards as we replace
the vulkan code by libplacebo wrappers.

I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.

As an aside, fix an issue where the man page didn't mention d3d11.
2018-12-01 15:50:23 +02:00
TheAMM
b6a431ec55 man: fix --watch-later-directory formatting
Extra line prevents the sub-title formatting.
Removing it, the option is formatted like the others.
2018-11-28 18:02:45 +01:00
Philip Langdale
da1073c247 vo_gpu: vulkan: hwdec_cuda: Add support for Vulkan interop
Despite their place in the tree, hwdecs can be loaded and used just
fine by the vulkan GPU backend.

In this change we add Vulkan interop support to the cuda/nvdec hwdec.

The overall process is mostly straightforward, so the main observation
here is that I had to implement it using an intermediate Vulkan buffer
because the direct VkImage usage is blocked by a bug in the nvidia
driver. When that gets fixed, I will revisit this.

Nevertheless, the intermediate buffer copy is very cheap as it's all
device memory from start to finish. Overall CPU utilisation is pretty
much the same as with the OpenGL GPU backend.

Note that we cannot use a single intermediate buffer - rather there
is a pool of them. This is done because the cuda memcpys are not
explicitly synchronised with the texture uploads.

In the basic case, this doesn't matter because the hwdec is not
asked to map and copy the next frame until after the previous one
is rendered. In the interpolation case, we need extra future frames
available immediately, so we'll be asked to map/copy those frames
and vulkan will be asked to render them. So far, harmless right? No.

All the vulkan rendering, including the upload steps, are batched
together and end up running very asynchronously from the CUDA copies.

The end result is that all the copies happen one after another, and
only then do the uploads happen, which means all textures are uploaded
the same, final, frame data. Whoops. Unsurprisingly this results in
the jerky motion because every 3/4 frames are identical.

The buffer pool ensures that we do not overwrite a buffer that is
still waiting to be uploaded. The ra_buf_pool implementation
automatically checks if existing buffers are available for use and
only creates a new one if it really has to. It's hard to say for sure
what the maximum number of buffers might be but we believe it won't
be so large as to make this strategy unusable. The highest I've seen
is 12 when using interpolation with tscale=bicubic.

A future optimisation here is to synchronise the CUDA copies with
respect to the vulkan uploads. This can be done with shared semaphores
that would ensure the copy of the second frame only happens after the
upload of the first frame, and so on. This isn't trivial to implement
as I'd have to first adjust the hwdec code to use asynchronous cuda;
without that, there's no way to use the semaphore for synchronisation.
This should result in fewer intermediate buffers being required.
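
The pool logic itself is conceptually simple; a hedged sketch
(hypothetical types and names, not the actual ra_buf_pool code, error
handling omitted):

    #include <stdbool.h>
    #include <stdlib.h>

    struct buf { bool in_flight; /* device memory handle etc. */ };
    struct buf_pool { struct buf **bufs; int num; };

    // Return a buffer that is no longer pending upload, or grow the pool
    // if every existing buffer is still in use.
    static struct buf *pool_get(struct buf_pool *p)
    {
        for (int i = 0; i < p->num; i++) {
            if (!p->bufs[i]->in_flight)
                return p->bufs[i];
        }
        struct buf *b = calloc(1, sizeof(*b));
        p->bufs = realloc(p->bufs, sizeof(*p->bufs) * (p->num + 1));
        p->bufs[p->num++] = b;
        return b;
    }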
2018-10-22 21:35:48 +02:00
Niklas Haas
7ad60a7c5e vo_gpu: split --linear-scaling into two separate options
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.

The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).

Closes #6213.
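
Assuming the option names match the description, the split roughly maps
to config entries like:

  # linearize only when downscaling
  linear-downscaling=yes
  # sigmoidized upscaling (overrides linear-upscaling)
  sigmoid-upscaling=yes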
2018-10-19 22:58:01 +02:00
Akemi
8d2d0f0640 cocoa-cb: add Apple Software Renderer support
by default the pixel format creation falls back to software renderer
when everything fails. this is mostly needed for VMs. additionally one
can directly request an sw renderer or exclude it entirely.
2018-09-30 17:13:34 +03:00
Ricardo Constantino
9c184078a6
man/options: emphasize ytdl_hook's script options 2018-09-26 22:25:06 +01:00
wm4
559a400ac3 demux, stream: rip out the classic stream cache
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
2018-08-31 12:55:22 +02:00
Anton Kindestam
d2d7dba6ee manpage: fix reference to --tone-mapping by old option name 2018-08-18 20:32:41 +02:00
Jan Ekström
1a893e8257 gpu: prefer 16bit floating point FBO formats to 16bit integer ones
According to earlier discussions, this can improve visual quality.
This only changes the preferred order of the formats, not the
formats themselves.
2018-07-08 16:49:23 +03:00
wm4
31bce1cbe7 demux_lavf: drop obscure genpts option
This code shouldn't even exist in libavformat. If you still need it, you
can enable it via --demuxer-lavf-o.
2018-05-31 01:24:51 +03:00
wm4
ca97239cb6 options: add --http-proxy
Often requested, trivial.
2018-05-31 01:24:51 +03:00
wm4
3ca9598d5c manpage: update --demuxer-thread option
Be a bit more detailed, and discourage disabling it.
2018-05-31 01:24:51 +03:00
wm4
fba98cfb05 manpage: remove a reference to a removed option 2018-05-25 10:47:23 +02:00
wm4
1d46368404 manpage: mention that --no-correct-pts can break seeking too 2018-05-25 10:45:46 +02:00
wm4
982416266c demux_lavf: drop obscure genpts option
This code shouldn't even exist in libavformat. If you still need it, you
can enable it via --demuxer-lavf-o.
2018-05-24 19:56:35 +02:00
wm4
b2e24f42d5 options: add --http-proxy
Often requested, trivial.
2018-05-24 19:56:35 +02:00
wm4
dbcd654e61 player: make playback termination asynchronous
Until now, stopping playback aborted the demuxer and I/O layer violently
by signaling mp_cancel (bound to libavformat's AVIOInterruptCB
mechanism). Change it to try closing them gracefully.

The main purpose is to silence those libavformat errors that happen when
you request termination. Most of libavformat barely cares about the
termination mechanism (AVIOInterruptCB), and essentially it's like the
network connection is abruptly severed, or file I/O suddenly returns I/O
errors. There were issues with dumb TLS warnings, parsers complaining
about incomplete data, and some special protocols that require server
communication to gracefully disconnect.

We still want to abort it forcefully if it refuses to terminate on its
own, so a timeout is required. Users can set the timeout to 0, which
should give them the old behavior.

This also removes the old mechanism that treats certain commands (like
"quit") specially, and tries to terminate the demuxers even if the core
is currently frozen. This is for situations where the core synchronized
to the demuxer or stream layer while network is unresponsive. This in
turn can only happen due to the "program" or "cache-size" properties in
the current code (see one of the previous commits). Also, the old
mechanism doesn't fit particularly well with the new one. We wouldn't
want to abort playback immediately on a "quit" command - the new code is
all about giving it a chance to end it gracefully. We'd need some sort
of watchdog thread or something equally complicated to handle this. So
just remove it.

The change in osd.c is to prevent it from clearing the status line while
waiting for termination. The normal status line code doesn't output
anything useful at this point, and the code path taken clears it, both
of which amount to an annoying behavior change, so just let it show the
old one.
2018-05-24 19:56:35 +02:00
wm4
dee84be222 manpage: update --demuxer-thread option
Be a bit more detailed, and discourage disabling it.
2018-05-24 19:56:35 +02:00
wm4
5f61892c42 manpage: remove a reference to a removed option 2018-05-24 19:56:34 +02:00
wm4
fb62ffdb94 manpage: mention that --no-correct-pts can break seeking too 2018-05-24 19:56:34 +02:00
Niklas Haas
05b392bc94 vo_gpu: allow higher icc-contrast and improve logging
With the advent of actual HDR devices, my real measured ICC profile has
an "infinite" contrast, since the display is completely off on pure
black inputs. 100k:1 might not be enough, so let's just bump it up to
1m:1 to be safe.

Also, improve the logging in the case that the detected contrast is too
high by default.
2018-05-17 22:56:45 +03:00
Niklas Haas
c0eea89b4d manpage: fix typo 2018-05-17 13:19:25 +02:00
Niklas Haas
37ec321287 manpage: clarify target-prim/trc=auto behavior
This logic has been changed throughout the years, notably in 38ac5d5 and
3bdbf6. Update the documentation to reflect the current state.

Closes #5834.
2018-05-17 13:19:25 +02:00
wm4
b18399befe manpage: remove 4 previously removed options
The manpage parts were forgotten when removing the options.
2018-05-01 18:36:15 +03:00
wm4
a79327189e manpage: --demuxer-seekable-cache is not experimental anymore
This seems to work surprisingly well, and it's enabled by default
(contrary to what the old text claims).
2018-05-01 00:25:27 +03:00
wm4
7dd69ef77c command: change cycle-value command behavior
Instead of using an internal counter to keep track of the value that was
set last, attempt to find the current value of the property/option in
the value list, and then set the next value in the list.

There are some potential problems. If a property refuses to accept a
specific value, the cycle-values command will fail, and start from the
same position again. It can't know that it's supposed to skip the next
value. The same can happen to properties which behave "strangely", such
as the "aspect" property, which will return the current aspect if you
write "-1" to it. As a consequence, cycle-values can appear to get
"stuck".

I still think the new behavior is closer to what users expect, and is
generally more useful. We won't restore the ability to get the old
behavior, unless we decide to revert this commit entirely.

Fixes #5772, and hopefully other complaints.
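
A minimal sketch of the new lookup behavior (illustrative only; the
names are made up):

    #include <string.h>

    // Find the current property value in the user's list and return the
    // next one, wrapping around. If the current value is not found,
    // start over from the first entry.
    static const char *next_cycle_value(const char **values, int n,
                                        const char *current)
    {
        for (int i = 0; i < n; i++) {
            if (strcmp(values[i], current) == 0)
                return values[(i + 1) % n];
        }
        return values[0];
    }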
2018-04-29 02:21:32 +03:00
wm4
d6af6efbf9 vd_lavc: enable dr by default
I had this enabled for quite a while and experienced no issues. I'm not
aware of other issues either.
2018-04-29 02:21:32 +03:00
wm4
4e7cbb7606 audio: don't recreate AO if a filter changes the output format
Until recently, the AO was reinitialized strictly only on decoder format
changes. But the commit for simplifying audio format negotiation removed
this. Now the AO is recreated for any format change.

This is sort of annoying if you change playback speed. The
insertion/removal of af_scaletempo can change the sample format. For
example, the acompressor filter will convert output to double, so
toggling scaletempo will force the format back to float. This recreates
the AO under the --gapless-audio=weak default. This likely affects a lot
of other filters too.

Work this around by allowing sample format changes, and keeping the
current AO format in these cases. This is probably not a big problem.
Most audio APIs force the output format to float anyway.

This means you actually have to worry about what the default gapless
mode does to your audio. If you start with a file that uses 8 bit per
sample, and then continue playing a 24 bit FLAC, it will be converted
down to 8 bit per sample. (Assuming they are played in a way that uses
the gapless logic.)
2018-04-15 23:11:33 +03:00
wm4
401bd57d44 ao_alsa: add options for controlling period/buffer size 2018-04-15 23:11:33 +03:00
Kevin Mitchell
cacb0ad3dc manpage: document vaapi-device
This was left out of e3e2c79 by mistake.
2018-04-08 22:24:04 +03:00
Kevin Mitchell
576dabf654 manpage: move cuda-decode-device with hwdec options 2018-04-08 22:24:04 +03:00
wm4
4655923d38 manpage: mention how to get multiple video tracks for --lavfi-complex
See #5670.
2018-03-26 19:47:08 +02:00
wm4
bfb3a78964 manpage: document that --ao overrides --audio-device
Fixes #5640.
2018-03-15 23:13:53 -07:00
wm4
2c572e2bb1 video: add an option to tune waiting for video timing
Probably mostly useful for the libmpv render API.
2018-03-15 23:13:53 -07:00
Ricardo Constantino
38e5b141c6
DOCS/options: clarify that --end also supports relative time 2018-03-15 14:20:12 +00:00
wm4
775b86212d video: add option to reduce latency by 1 or 2 frames
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.

Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.

To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
2018-03-03 02:38:01 +02:00
wm4
16eca7139a demux_lavf: add --demuxer-lavf-probe-info=nostreams
Another attempt to try to make it behave in certain situations.
2018-03-03 02:38:01 +02:00
wm4
b037121430 client API: deprecate opengl-cb API and introduce a replacement API
The purpose of the new API is to make it useable with other APIs than
OpenGL, especially D3D11 and vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.

This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().

In order to make the render API generic, it's annoyingly EGL style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.

The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.

Mostly untested. Seems to work fine with mpc-qt.
2018-02-28 00:55:06 -08:00
Akemi
aa974b2aa7 cocoa-cb: make fullscreen resize animation duration configurable 2018-02-28 00:48:44 -08:00
Akemi
938ad6ebc0 cocoa-cb: change border and borderless window styling
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.

Also make the earliest supported macOS version 10.10.

Fixes #4789, #3944
2018-02-28 00:48:44 -08:00
Niklas Haas
441e384390 vo_gpu: introduce --target-peak
This solves a number of problems simultaneously:

1. When outputting HLG, this allows tuning the OOTF based on the display
   characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
   output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
   controlling the output brightness directly.

Closes #5521
2018-02-20 22:02:51 +02:00
wm4
830f0aed97 video: make --deinterlace and HW deinterlace filters always deinterlace
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.

The potential problem is that this might upset people who always enabled
deinterlace and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).

Fixes #5219.
2018-02-13 17:45:29 -08:00
Akemi
c5e4538bc4 cocoa-cb: initial implementation via opengl-cb API
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.

- awful opengl performance with a non-layer-backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non-layer-backed context
- bad performance in fullscreen because of system optimisations

all the problems were caused by using a normal opengl context, that
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed could be
properly worked around.

this has all features our old backend has sans the wid embedding,
the possibility to disable the automatic GPU switching and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
second is technically not possible atm because we have to pre-allocate
our opengl context at a time the config isn't read yet, so we can't get
the needed property. third one is a bit tricky because of deadlocking
and it needed to be in sync, hopefully i can work around that in the
future.

this also has at least one additional feature or eye-candy. a properly
working fullscreen animation with the native fs. also since this is a
direct port of the old backend of the parts that could be used, though
with adaptions and improvements, this looks a lot cleaner and easier to
understand.

some credit goes to @pigoz for the initial swift build support which
i could improve upon.

Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
2018-02-12 04:49:15 -08:00
Akemi
abf2efb107 osx: always deactivate the early opengl flush on macOS
early flushing only caused problems on macOS, which include:
- performance problems and huge amount of dropped frames
- problems with playing back video files with fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- not properly detected display refresh rate

we always deactivate any early flush for macOS to fix these problems.
2018-02-12 04:49:15 -08:00
Ricardo Constantino
57228b6581
ytdl_hook: add script opt for using manifest URLs
Disable by default.
This feature was added in 7eb342757, which allowed stream selection
at runtime. The problem with this atm is that FFmpeg will try to demux
the first packet of every track, leading to a noticeable delay opening
the URL.

This option can be changed to enabled by default or removed when
HLS/DASH demuxers are improved upstream.
2018-02-11 23:27:37 -08:00
wm4
9f595f3a80 vo_gpu: make screenshots use the GL renderer
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.

The logic for when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking at whether the source video format has more
than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 is 3 bytes and usually not supported by GPUs for
rendering, so we have to use RGBA64, which forces an alpha channel.

Will break for users who use --target-trc and similar options.

I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.

This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.

Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
2018-02-11 17:45:51 -08:00
Niklas Haas
e3d93fde2f vo_gpu: port HDR tone mapping algorithm from libplacebo
The current peak detection algorithm was very bugged (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.

The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)

In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.

I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.

This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
2018-02-05 23:11:18 -08:00
wm4
7019e0dcfe
swresample: limit output size of audio frames
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.

Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
2018-02-03 05:01:29 -08:00
Ricardo Constantino
eaa97daf65
ytdl_hook: pass http proxy to ffmpeg
FFmpeg only supports http proxies and ignores it if
the resulting url is https. Also, no SOCKS.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so
it doesn't confuse mpv because of the colons.

You need to pass it as an option because youtube-dl doesn't give
us the proxy.

Or just set `http_proxy` environment variable as recommended before.

Added example using -append, which doesn't need escaping.
2018-01-30 12:19:34 +00:00
Kevin Mitchell
3766024dcd command: add --osd-on-seek option defaulting to bar
Restores behaviour prior to aef2ed5dc1.

That change was apparently unpopular. However, given the amount of
complaining over how hard it is to change the defaults by rebinding every
key, I think the extra option introduced by this commit is justified.

Technically not all behaviour is restored, because now --no-osd-bar will
not instead display the msg text on seek. I think that feature was a
little weird and is now easy enough to remedy with the --osd-on-seek
option.
2018-01-26 21:50:38 -08:00
Kevin Mitchell
8c8dcc698b Revert "command: make pause display the same osd-msg-bar as seek"
This reverts commit 9812e276aa.

This was apparently unpopular. I still think the pause OSD should be the
same as seek even if it's not visible by default, but it seems that
whether to display a given property change is currently conflated with
what to display.

The reverted behaviour can be restored by adding something like the
following to input.conf:

SPACE cycle pause; show_progress
2018-01-26 21:50:38 -08:00
wm4
5441a12a1e manpage: mention --network-timeout is broken with RTSP
Not much we can do, too hard to work around.

Fixes #3361.
2018-01-25 20:18:32 -08:00
wm4
11f5713e3b options: add an option type for byte sizes
And use it for 2 demuxer options. It could be used for more options
later. (Though the --cache options can not use this, because they use KB
as base unit.)
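
Assuming the two demuxer options are --demuxer-max-bytes and
--demuxer-max-back-bytes, suffixed values become possible, roughly:

  mpv --demuxer-max-bytes=500MiB --demuxer-max-back-bytes=100MiB video.mkv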
2018-01-25 20:18:32 -08:00
Ricardo Constantino
28021feabb
manpage: document using sub-shadow-offset for background sizing 2018-01-24 19:40:13 +00:00
wm4
6d4b4c0de3 audio: add global options for resampler defaults
This is part of trying to get rid of --af-defaults, and the af
resample filter.

It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
2018-01-13 03:26:45 -08:00
wm4
69d062ce37 client API: remove ytdl=no default
With the recent changes to the script it does not incur a startup delay
by default due to starting youtube-dl and waiting for it. This was the
main reason for making libmpv have a different default.

Starting sub processes from a library can still be a bit fishy, but I
think it's ok. Still mention it in the libmpv header. There were already
other cases where libmpv would start its own processes, such as the X11
backend calling xdg-screensaver. (The reason why this is fishy is
because UNIX process management sucks: SIGCHLD and the wait() syscall
make sub processes non-transparent and could potentially introduce
conflicts with code trying to use them.)
2018-01-13 03:26:45 -08:00
Kevin Mitchell
6e974f77bd command: make pause display the same osd-msg-bar as seek
Previously, toggling pause would generate no osd response, and changing
that wasn't even configurable. This was surprising to users who
generally expect to see *where* pause / unpause is taking place (#3028).
2018-01-07 16:07:04 -08:00
Kevin Mitchell
cd8daee3d3 command: default to osd-msg-bar for seeks
The previous default was osd-bar (unless the user specified
--no-osd-bar, in which case it was osd-msg). Aside from requiring
some twisted logic to implement, this surprised users since osd-msg3
wasn't displayed when seeking with the keyboard (#3028), so the time
seeked to was never displayed.
2018-01-07 16:07:04 -08:00
Kevin Mitchell
57f43c35ec manpage: fix typos in osd level descriptions 2018-01-07 16:07:04 -08:00
Ricardo Constantino
87d3af6f19
ytdl_hook: add script option to revert to trying youtube-dl first
Should only make a difference if most of the URLs you open need
youtube-dl parsing.
2018-01-07 15:56:55 +00:00
wm4
34cf655ddd player: strictly never autoselect tracks from --external-files
Before this commit, some autoselection of tracks coming from files
loaded with --external-files was still done. This commit removes all of
it, and the only way to select a track is via the explicit stream
selection options like --vid/--sid/--aid.

I think this was always the original intention. The change could in
theory still unintentionally surprise some users, so add a changelog
entry.

This does not affect --audio-file/--sub-file, even if these contain
mismatching track types. E.g. if audio files passed to --audio-file
contain subtitles, these should still be selected. Past feature requests
indicate that users want this.
2018-01-06 14:42:22 -08:00
James Ross-Gowan
88c29b1301 vo_gpu: hwdec_dxva2dxgi: initial implementation
This enables DXVA2 hardware decoding with ra_d3d11. It should be useful
for Windows 7, where D3D11VA is not available. Images are transferred
from D3D9 to D3D11 using D3D9Ex surface sharing[1].

Following Microsoft's recommendations, it uses a queue of shared
surfaces, similar to Microsoft's ISurfaceQueue. This will hopefully
prevent surface sharing from impacting parallelism and allow multiple
D3D11 frames to be in-flight at once.

[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554.aspx
2018-01-06 11:26:15 +11:00
wm4
f798bc3c25 player: add --cache-pause-initial option to start in buffering state
Reasons why you'd want this see manpage additions. Disabled by default,
because it would increase latency of live streams by default. (Or well,
at least it would be another problem when trying getting lower latency.)
2018-01-03 15:43:51 -08:00
wm4
9c22108fec player: use fixed timeout for cache pausing (buffering) duration
This tried to be clever by waiting for a longer time each time the
buffer was underrunning, or shorter if it was getting better. I think
this was pretty weird behavior and makes no sense. If the user really
wants the stream to buffer longer, he/she/it can just pause the player
(the network caches will continue to be filled until they're full).
Every time I actually noticed this code triggering in my own use, I
didn't find it helpful. Apart from that it was pretty hard to test.

Some waiting is needed to avoid that the player just plays the available
data as fast as possible (to compensate for late frames and underrunning
audio). Just use a fixed wait time, which can now be controlled by the
new --cache-pause-wait option.
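
For example (assuming the wait is given in seconds):

  mpv --cache-pause-wait=3 URL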
2018-01-03 15:43:51 -08:00
wm4
6092c967ab manpage: slightly improve description of --cache-pause option 2018-01-03 15:43:51 -08:00
sfan5
48943a73f6 vo_gpu/context_android: replace both options with android-surface-size
This allows us to automatically trigger a VOCTRL_RESIZE (also contained).
2018-01-02 15:04:31 -08:00
Aman Gupta
2dd020efc2 vo_gpu/android: fallback to EGL_WIDTH/HEIGHT
Uses the EGL width/height by default when the user fails to set
the android-surface-width/android-surface-height options.

This means the vo-resize command is optional, and does not need to
be implemented on android devices which do not support rotation.

Signed-off-by: Aman Gupta <aman@tmm1.net>
2018-01-01 22:21:44 -08:00
Kevin Mitchell
d9ca235c68 manpage: put android surface options on one line
This is required by rst2man.
2017-12-28 05:12:54 -07:00
wm4
d480b1261b vd_lavc: add an option to explicitly workaround x264 4:4:4 bug
Technically, the user could just use --vd-lavc-o with the same result.
But I find it better to make this an explicit option, so we can document
the ups and downs, and also avoid setting it for non-h264.
2017-12-28 00:59:22 -07:00
sfan5
0030e049cd player: add internal vo-resize command
Intended to be used with the properties from previous commit.
2017-12-27 14:29:15 -07:00
sfan5
451fc931b0 vo_gpu/context: Let embedding application handle surface resizes
The callbacks for this are Java-only and EGL does not reliably
return the correct values.
2017-12-27 14:29:15 -07:00
Niklas Haas
286d421666 vo_gpu: vulkan: allow disabling async tf/comp
Async compute in particular seems to cause problems on some drivers, and
even when supported the benefits are not that massive from the tests I
have seen, so it's probably safe to keep it off by default.

Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
2017-12-25 00:47:53 +01:00
Niklas Haas
bded247fb5 vo_gpu: vulkan: support split command pools
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:

1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs

Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.

Our deadlock-avoidance strategy is stupidly simple: Just flush the
command every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.

On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
2017-12-25 00:47:53 +01:00
Niklas Haas
fb1c7bde42 vo_gpu: vulkan: properly track image dependencies
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:

1. It allows real synchronization of image access across multiple frames
   when using multiple queues for parallelism.

2. It allows using events instead of pipeline barriers, which is a
   finer-grained synchronization primitive that allows for more
   efficient layout transitions over longer durations.

This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)

vo_gpu: vulkan: remove no-longer-true optimization

The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
2017-12-25 00:47:53 +01:00
wm4
a23a98f648 cache: lower default size to 2*10MB
Reduce it from 75MB in both directions (forward/backwards) to 10MB each.

The stream cache is kind of becoming useless in favor of the demuxer
cache. Using both doesn't make much sense, because they will contain
duplicated data for no reason.

Still leave it at 10MB, which may help with mp4 a bit. libavformat's mp4
demuxer tends to seek too much, so we try to avoid triggering network
level seeks by having some caching in the stream layer.
2017-12-23 00:32:59 +01:00
wm4
382a8ac0b0 demux: bump the demuxer cache readahead duration
Set it to 10 hours, which is practically unlimited. (Avoiding use of
"inf", since that might interact strangely with the option parser and
such.)
2017-12-23 00:32:59 +01:00
wm4
2964788055 options: deprecate --ff- options and properties
Some old crap which nobody needs and which probably nobody uses.

This relies on a GCC extension: using "## __VA_ARGS__" to remove the
comma from the argument list if the va args are empty. It's supported
by clang, and there's some chance newer standards will introduce a
proper way to do this. (Even if it breaks somewhere, it will be a
problem only for 1 release, since I want to drop the deprecated
properties immediately.)
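
A self-contained illustration of the extension in question (not code from
this commit):

    #include <stdio.h>

    /* With the GNU extension, "##" swallows the leading comma when the
     * variadic argument list is empty, so both calls below expand validly. */
    #define LOGF(fmt, ...) fprintf(stderr, "mpv: " fmt "\n", ## __VA_ARGS__)

    int main(void)
    {
        LOGF("no extra arguments");
        LOGF("value = %d", 42);
        return 0;
    }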
2017-12-21 19:51:30 +01:00
Aman Gupta
7e2252688b vo_mediacodec_embed: implement hwcontext
Fixes vo_mediacodec_embed, which was broken in 80359c6615
2017-12-20 15:45:55 +11:00
wm4
d690ee0959 client API: change --stop-playback-on-init-failure default
This was off for mpv CLI, but on for libmpv. The motivation behind this
was that it would be confusing for applications if libmpv continued
playback in a severely "degraded" way (without either audio or video),
and that it would be better to fail early.

In reality the behavior was just a confusing difference to mpv CLI, and
has confused actual users as well. Get rid of it.

Not bothering with a version bump, since this is so minor, and it's easy
to ensure compatibility in affected applications by just setting the
option explicitly.

(Also adding the missing next-release-marker in client-api-changes.rst.)
2017-12-17 15:45:24 -08:00
Niklas Haas
d64c33c518 msg: bump up log level of --log-file
This now logs -v -v by default, instead of -v.
2017-12-15 22:28:47 -08:00
wm4
cedcdc1f3c vd_lavc: rename --hwdec=rpi to --hwdec=mmal
Annoying exception that makes no sense to keep. Normally, users or
client applications will either use --hwdec=auto, or not set the option
at all, which both leads to the expected result.
2017-12-15 12:32:25 +02:00
wm4
3c62a20f48 manpage: clarify --sub-file(s) options
This was a bit confused, and I bet nobody understood whether to use
--sub-file or --sub-files, and what the difference is. Explicitly
mention that both variants exist, and how they are related.
2017-12-07 23:48:16 -08:00
Aman Gupta
0c6a488ef9 options: add --start=none to reset previously set start time
Previously when using a libmpv instance to play multiple videos,
once --start was set there was no clear way to unset it. You could
use --start=0, but 0 does not always mean the beginning of the file
(especially when using --rebase-start-time=no). Looking up the start
timestamp and passing that in also does not always work, particularly
when the first timestamp is negative (since negative values to --start
have a special meaning).

This commit adds a new "none" value which maps to the internal
REL_TIME_NONE, matching the default value of the play_start option.
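
A hedged libmpv sketch of the use case above (file names are placeholders):

    #include <mpv/client.h>

    static void play_two_files(mpv_handle *mpv)
    {
        /* First file: start 30 seconds in. */
        mpv_set_option_string(mpv, "start", "30");
        const char *cmd1[] = {"loadfile", "first.mkv", NULL};
        mpv_command(mpv, cmd1);

        /* Second file: clear the start position again instead of guessing
         * a timestamp that means "beginning of the file". */
        mpv_set_option_string(mpv, "start", "none");
        const char *cmd2[] = {"loadfile", "second.mkv", NULL};
        mpv_command(mpv, cmd2);
    }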
2017-12-06 20:50:31 +02:00
Leo Izen
a2e34b6f41 manpage: minor fixes to documentation 2017-12-06 00:11:37 -08:00
Rostislav Pehlivanov
f19797dea6 Remove support for ffmpeg-mpv 2017-12-05 08:27:55 +00:00
Leo Izen
713668b99a manpage: add some minor documentation fixes
- replace the incorrect reference to --opengl-shader
- document a caveat when using --image-display-duration
- add some documentation on --vf=lavfi=
2017-12-04 20:57:16 -05:00
Leo Izen
fdc311625e player/misc.c: allow both --length and --end to control play endpoint
Most options that change the playback endpoint coexist and playback
stops when it reaches any of them. (e.g. --ab-loop-b, --end, or
--chapter). This patch extends that behavior to --length so it isn't
automatically trumped by --end if both are present. These two will
interact now as the other options do.

This change is also documented in DOCS/man/options.rst.
2017-12-04 12:34:02 -05:00
Mariusz Skoneczko
1a9fb7937a manpage: vaapi-copy is not limited to Intel GPUs
vaapi-copy works with some AMD cards
2017-12-03 21:19:39 +01:00
Martin Herkt
1d92a804d2 man: remove incorrect note about default opengl backend 2017-12-02 06:49:12 +01:00
wm4
80359c6615 vd_lavc: drop mediacodec direct rendering support temporarily
The libavcodec mediacodec support does not conform to the new hwaccel
APIs yet. It has been agreed upon that this glue code can be deleted
for now, and support for it will be restored at a later point.

Readding would require that it supports the AVCodecContext.hw_device_ctx
API. The hw_device_ctx would then contain the surface ID.
vo_mediacodec_embed would actually perform the task of creating
vo.hwdec_devs and adding a mp_hwdec_ctx, whose av_device_ref is a
AVHWDeviceContext containing the android surface.
2017-12-01 18:01:15 +01:00
wm4
91586c3592 vo_gpu: make it possible to load multiple hwdec interop drivers
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.

In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.

The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.

vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.

It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.

This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
2017-12-01 05:57:01 +01:00
wm4
3d27a0792b af: remove deprecated audio filters
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).

This remove the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there's one), but for now let it be
gone.

This temporarily breaks volume control (and things related to it, like
replaygain).
2017-11-29 21:30:51 +01:00
wm4
23d9dc5457 video: remove automatic stereo3d filter insertion
The internal stereo3d filter was removed due to being GPL only, and due
to being a mess that somehow used libavfilter's filter. Without this
filter, it's hard to remove our internal stereo3d image attribute, so
even using libavfilter's stereo3d filter would not work too well (unless
someone fixes it and makes it able to use AVFrame metadata, which we
then could mirror in mp_image).

This was never well thought-through anyway, so just drop it. I think
some "downsampling" support would still make sense, maybe that can be
readded later.
2017-11-29 21:30:51 +01:00
Oswald Pan
ae05d1f62c manpage: clarify bitstreaming options
Changes:
List other (commonly used) bitstreamed formats.
Clarify that WASAPI can only output multichannel PCM in exclusive mode.
2017-11-19 11:34:10 -08:00
wm4
1b0dc7d169 demux: use seekable cache for network by default, bump prefetch limit
The option for enabling it has now an "auto" choice, which is the
default, and which will enable it if the media is thought to be via
network or if the stream cache is enabled (same logic as --cache-secs).

Also bump the --cache-secs default from 10 to 120.
2017-11-10 16:30:43 +01:00
wm4
6bcdcaeeea demux: set default back buffer to some high value
Some back buffer is required to make the immediate forward range
seekable. This is because the back buffer limit is strictly enforced.
Just set a rather high back buffer by default. It's not used if
--demuxer-seekable-cache is disabled, so this is without risk.
2017-11-10 12:37:19 +01:00
wm4
935e406d63 demux: support multiple seekable cached ranges
Until now, the demuxer cache was limited to a single range. Extend this
to multiple ranges. Should be useful for slow network streams.

This commit changes a lot in the internal demuxer cache logic, so
there's a lot of room for bugs and regressions. The logic without
demuxer cache is mostly untouched, but also involved with the code
changes. Or in other words, this commit probably fucks up shit.

There are two things which makes multiple cached ranges rather hard:

1. the need to resume the demuxer at the end of a cached range when
   seeking to it
2. joining two adjacent ranges when the lower range "grows" into it (and
   resuming the demuxer at the end of the new joined range)

"Resuming" the demuxer means that we perform a low level seek to the end
of a cached range, and properly append new packets to it, without adding
packets multiple times or creating holes due to missing packets.

Since audio and video never line up exactly, there is no clean "cut"
possible, at which you could resume the demuxer cleanly (for 1.) or
which you could use to detect that two ranges are perfectly adjacent
(for 2.). The way how the demuxer interleaves multiple streams is also
unpredictable. Typically you will have to expect that it randomly allows
one of the streams to be ahead by a bit, and so on.

To deal with this, we have heuristics in place to detect when one packet
equals or is "behind" a packet that was demuxed earlier. We reuse the
refresh seek logic (used to "reread" packets into the demuxer cache when
enabling a track), which checks for certain packet invariants.
Currently, it observes whether either the raw packet position, or the
packet DTS is strictly monotonically increasing. If none of them are
true, we discard old ranges when creating a new one.
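
Simplified sketch of that invariant check (struct and function names are made
up for illustration):

    struct pkt_ref {
        long long pos;  /* raw byte position, or -1 if unknown */
        double dts;     /* decode timestamp, or NaN if unknown */
    };

    /* Either the raw position or the DTS must be strictly increasing for
     * the new packet to be considered a continuation of the old range. */
    static int pkt_is_newer(struct pkt_ref prev, struct pkt_ref next)
    {
        int pos_ok = prev.pos >= 0 && next.pos >= 0 && next.pos > prev.pos;
        int dts_ok = prev.dts == prev.dts && next.dts == next.dts /* not NaN */
                     && next.dts > prev.dts;
        return pos_ok || dts_ok;
    }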

This heavily depends on the file format and the demuxer behavior. For
example, not all file formats have DTS, and the packet position can be
unset due to libavformat not always setting it (e.g. when parsers are
used).

At the same time, we must deal with all the complicated state used to
track prefetching and seek ranges. In some complicated corner cases, we
just give up and discard other seek ranges, even if the previously
mentioned packet invariants are fulfilled.

To handle joining, we're being particularly dumb, and require a small
overlap to be confident that two ranges join perfectly. (This could be
done incrementally with as little overlap as 1 packet, but corner cases
would eat us: each stream needs to be joined separately, and the cache
pruning logic could remove overlapping packets for other streams again.)

Another restriction is that switching the cached range will always
trigger an asynchronous low level seek to resume demuxing at the new
range. Some users might find this annoying.

Interleaved subtitles are not fully handled yet. The code will clamp the
seekable range to where subtitle packets are.
2017-11-09 10:23:57 +01:00
James Ross-Gowan
e7bf5576e5 vo_gpu: hwdec_d3d11va: allow zero-copy video decoding
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx

In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.

Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
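
A hedged sketch of the "undefined" part (not the exact mpv code): creating a
luma-plane SRV for one slice of a decoder-bound NV12 texture array.

    #define COBJMACROS
    #include <d3d11.h>

    static ID3D11ShaderResourceView *luma_srv(ID3D11Device *dev,
                                              ID3D11Texture2D *decoder_array,
                                              UINT slice)
    {
        ID3D11ShaderResourceView *srv = NULL;
        D3D11_SHADER_RESOURCE_VIEW_DESC desc = {
            .Format = DXGI_FORMAT_R8_UNORM, /* Y plane of an NV12 surface */
            .ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DARRAY,
            .Texture2DArray = {
                .MipLevels = 1,
                .FirstArraySlice = slice,   /* slice the decoder rendered to */
                .ArraySize = 1,
            },
        };
        /* MSDN says BIND_DECODER textures can't be used here; most drivers
         * accept it anyway, which is what "zero-copy" relies on. */
        ID3D11Device_CreateShaderResourceView(dev,
                                              (ID3D11Resource *)decoder_array,
                                              &desc, &srv);
        return srv;
    }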
2017-11-07 20:27:13 +11:00
James Ross-Gowan
68eac1a1e7 vo_gpu: d3d11: initial implementation
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.

What works:

- All of mpv's internal shaders should work, including compute shaders.

- Some external shaders have been tested and work, including RAVU and
  adaptive-sharpen.

- Non-dumb mode works, even on very old hardware. Most features work at
  feature level 9_3 and all features work at feature level 10_0. Some
  features also work at feature level 9_1 and 9_2, but without high-bit-
  depth FBOs, it's not very useful. (Hardware this old is probably not
  fast enough for advanced features anyway.)

  Note: This is more compatible than ANGLE, which requires 9_3 to work
  at all (GLES 2.0), and 10_1 for non-dumb-mode (GLES 3.0).

- Hardware decoding with D3D11VA, including decoding of 10-bit formats
  without truncation to 8-bit.

What doesn't work / can be improved:

- PBO upload and direct rendering does not work yet. Direct rendering
  requires persistent-mapped PBOs because the decoder needs to be able
  to read data from images that have already been decoded and uploaded.
  Unfortunately, it seems like persistent-mapped PBOs are fundamentally
  incompatible with D3D11, which requires all resources to use driver-
  managed memory and requires memory to be unmapped (and hence pointers
  to be invalidated) when a resource is used in a draw or copy
  operation.

  However it might be possible to use D3D11's limited multithreading
  capabilities to emulate some features of PBOs, like asynchronous
  texture uploading.

- The blit() and clear() operations don't have equivalents in the D3D11
  API that handle all cases, so in most cases, they have to be emulated
  with a shader. This is currently done inside ra_d3d11, but ideally it
  would be done in generic code, so it can take advantage of mpv's
  shader generation utilities.

- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
  it does not expose a C interface itself.

  The library is available here: https://github.com/rossy/crossc

- The D3D11 context could be made to support more modern DXGI features
  in future. For example, it should be possible to add support for
  high-bit-depth and HDR output with DXGI 1.5/1.6.
2017-11-07 20:27:13 +11:00
wm4
57248915fa demux: add option to create CC tracks eagerly
We don't try to auto-detect them at load time, as that would be too
much of a pain - even FFmpeg requires fetching and parsing of video
packets, and exposes the information only via a deprecated API.

But there still needs to be a way to select them by default. This is
also needed to get the first CC packet at all (without seeking back).

This commit also attempts to clean up locking a bit, which is a PITA,
but it's better to be careful & clean.
2017-11-03 13:55:32 +01:00
wm4
4f51326c28 manpage: fix/improve --msg-level description
Fixes #5055.
2017-10-30 12:58:55 +01:00
wm4
6b745769b1 vd_lavc: add support for nvdec hwaccel
See manpage additions.

(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support yet, but this could get
fixed soon.)
2017-10-28 19:59:08 +02:00
Niklas Haas
c2d4fd0ef4 vo_gpu: change --tone-mapping-desaturate algorithm
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.

Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
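
Purely illustrative sketch of the general idea (not mpv's actual curve or
parameter scaling): desaturate toward luma with a strength that ramps up
gradually with the signal level.

    #include <math.h>

    static void desaturate(float rgb[3], float strength)
    {
        /* BT.709 luma weights; real code uses the video's own colorspace. */
        float luma = 0.2126f * rgb[0] + 0.7152f * rgb[1] + 0.0722f * rgb[2];
        float sig = fmaxf(fmaxf(rgb[0], rgb[1]), rgb[2]); /* signal level */
        float base = 0.18f;                /* assumed reference level */
        if (sig <= base || strength <= 0)
            return;
        /* Ramp from 0 at the base level toward 1 for very bright signals. */
        float coeff = fminf((sig - base) / sig * strength, 1.0f);
        for (int i = 0; i < 3; i++)
            rgb[i] += coeff * (luma - rgb[i]); /* mix toward gray */
    }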
2017-10-25 17:24:27 +02:00
wm4
4593002222 manpage: add --hwdec=rkmpp entry 2017-10-23 21:12:45 +02:00
wm4
719a435d36 demux: add a back buffer and the ability to seek into it
This improves upon the previous commit, and partially rewrites it (and
other code). It does:

- disable the seeking within cache by default, and add an option to
  control it
- mess with the buffer estimation reporting code, which will most likely
  lead to funny regressions even if the new features are not enabled
- add a back buffer to the packet cache
- enhance the seek code so you can seek into the back buffer
- unnecessarily change a bunch of other stuff for no reason
- fuck up everything and vomit ponies and rainbows

This should actually be pretty usable. One thing we should add are some
properties to report the proper buffer state. Then the OSC could show a
nice buffer range. Also configuration of the buffers could be made
simpler. Once this has been tested enough, it can be enabled by default,
and might replace the stream cache's byte ringbuffer.

In addition it may or may not be possible to keep other buffer ranges
when seeking outside of the current range, but that would be much more
complex.
2017-10-21 19:26:33 +02:00
James Ross-Gowan
d9e3bad500 vo_gpu: add rgba16hf to the list of FBO formats
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
2017-10-18 23:55:13 +11:00
wm4
b135af6842 video: add mp_image_params.hw_flags and add an example
It seems this will be useful for Rockchip DRM hwcontext integration.

DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to an
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.

We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.

This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.

The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
2017-10-16 15:02:12 +02:00
wm4
ac295960b8 video: make it possible to always override hardware decoding format
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.

We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
2017-10-16 15:02:12 +02:00
wm4
7cfae5adce vo_gpu: semi-fix --gpu-context/--gpu-api options and help output
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)

Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.

Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would require
a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)

To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable
doesn't make too much sense. If we wanted to save code/effort, we could
just remove the help output altogether.

--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.

Also remove a stray tab from the android entry on the manpage.
2017-10-16 10:57:51 +02:00
James Ross-Gowan
6d534138ed manpage: add Vulkan WSI extension name for --gpu-context=win
This matches the other Vulkan contexts.
2017-10-14 17:48:13 +11:00
wm4
902ae9ae41 options: add --vlang switch
For symmetry with --alang and --slang. 100% useless, but why not?
2017-10-13 00:31:43 +02:00
Julian
92a9150cc2 lua: integrate stats.lua script
Signed-off-by: wm4 <wm4@nowhere>

Compared to the original commit, rename --stats to --load-stats-overlay
and add an entry to options.rst.

Signed-off-by: wm4 <wm4@nowhere>
2017-10-09 20:47:33 +02:00
Aman Gupta
8fc21fd0d5 vo_gpu: add android opengl backend
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to the render
callback in java. Performance also suffers with opengl-cb,
due to the overhead of context switching in JNI.

With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
2017-10-09 18:36:54 +02:00
Aman Gupta
61a1612de9 hwdec: add mediacodec hardware decoder for IMGFMT_MEDIACODEC frames 2017-10-09 18:36:54 +02:00
Aman Gupta
d08e407c9e hwdec: rename mediacodec to mediacodec-copy 2017-10-09 18:36:54 +02:00
Rostislav Pehlivanov
9c806bc299 Revert "wayland_common: add support for embedding"
This reverts commit 8d8d4c5cb1.
2017-10-05 17:43:47 +01:00
Rostislav Pehlivanov
8d8d4c5cb1 wayland_common: add support for embedding 2017-10-05 16:23:15 +01:00
wm4
10dd120baa msg: make --msg-level affect --log-file too
But --msg-level can only raise the log level used for --log-file,
because the original idea with --log-file was that it'd log verbose
messages to disk even if terminal logging is lower than -v or fully
disabled.
2017-10-04 22:08:19 +02:00
Kranky K. Krackpot
910600a36f Man page: fix typo
Man page: fix typo as of https://github.com/mpv-player/mpv/issues/4913
2017-10-01 20:51:21 +11:00
Leo Izen
052ae5393a manpage: update --blend-subtitles affected options
Changed the reference from --gpu-gamma to --gamma-factor,
and changed the reference from --post-shader to --glsl-shaders,
in order to reflect actual changes to the option names.
2017-09-29 14:38:47 -04:00