mirror of https://github.com/mpv-player/mpv.git synced 2024-09-20 12:02:23 +02:00
Commit Graph

45781 Commits

Author SHA1 Message Date
wm4
274cc06aaf ao_alsa: change license to LGPL
Looks like this is covered by LGPL relicensing agreements now.

Notes about contributors who could not be reached or who didn't agree:

Commit 7fccb6486e has tons of mp_msg changes that look like they are not
copyrightable (even if they were, all mp_msg calls were rewritten in
mpv times again). The additional play() change looks suspicious, but
the function was rewritten several times anyway (first time after that
commit in 4f40ec312).

Commit 89ed1748ae was rewritten in commit 325311af3 and then again
several times after that. Basically all this code is unnecessary in
modern mpv and has been removed.

No code survived from the following commits: 4d31c3c53, 61ecf838f2,
d38968bd, 4deb67c3f. At least two cosmetic typo fixes are not
considered either.

Commit 22bb046ad is reverted (this wasn't a valid warning anyway, just
a C++-ism icc applied to C). Using the constants is nicer, but at least
I don't have to decide whether that change was copyrightable.
2017-11-23 16:43:59 +01:00
wm4
b2a08db71a ao_alsa: don't convert twice on retry
Obscure corner case.
2017-11-23 16:43:59 +01:00
Oswald Pan
ae05d1f62c manpage: clarify bitstreaming options
Changes:
List other (commonly used) bitstreamed formats.
Clarify that WASAPI can only output multichannel PCM in exclusive mode.
2017-11-19 11:34:10 -08:00
James Ross-Gowan
4569f39da5 vo_gpu: d3d11: don't use runtime version for UAV slot count
FL 11_1 is only supported with the Direct3D 11.1 runtime anyway, so
there is no need to check both the runtime version and the feature
level.
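
A hedged sketch of what that boils down to (illustrative, not the actual
ra_d3d11 code; assumes the C COM macros):

    #define COBJMACROS
    #include <d3d11_1.h>

    /* FL 11_1 implies the 11.1 runtime, so the feature level alone decides
     * how many UAV slots can be used. */
    static UINT max_uav_slots(ID3D11Device *dev)
    {
        return ID3D11Device_GetFeatureLevel(dev) >= D3D_FEATURE_LEVEL_11_1
               ? D3D11_1_UAV_SLOT_COUNT           /* 64 */
               : D3D11_PS_CS_UAV_REGISTER_COUNT;  /* 8 */
    }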
2017-11-19 21:49:27 +11:00
James Ross-Gowan
b9c742df10 vo_gpu: d3d11_helpers: don't try BGRA_SUPPORT
The D3D11_CREATE_DEVICE_BGRA_SUPPORT flag doesn't enable support for
BGRA textures. BGRA textures will be supported whether or not the flag
is passed. The flag just fails device creation if they are not supported,
as an API convenience for programs that need BGRA textures, such as
programs that use D2D or D3D9 interop. We can handle devices without
BGRA support fine, so don't bother with the flag.
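
A hedged sketch of probing for BGRA support without the flag (illustrative,
not the actual d3d11_helpers code):

    #define COBJMACROS
    #include <d3d11.h>
    #include <stdbool.h>

    /* Ask the already-created device whether BGRA textures work, instead of
     * letting D3D11_CREATE_DEVICE_BGRA_SUPPORT fail device creation. */
    static bool supports_bgra8(ID3D11Device *dev)
    {
        UINT support = 0;
        HRESULT hr = ID3D11Device_CheckFormatSupport(dev,
            DXGI_FORMAT_B8G8R8A8_UNORM, &support);
        return SUCCEEDED(hr) && (support & D3D11_FORMAT_SUPPORT_TEXTURE2D);
    }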
2017-11-19 21:37:57 +11:00
James Ross-Gowan
021fe791e1 vo_gpu: d3d11: mark the bgra8 format as unordered
Whoops. I was confused by the double-negative here.
2017-11-19 21:17:31 +11:00
James Ross-Gowan
0bdbe2bc53 win32: fix semantics of POSIX 2008 locale stubs
This silences some warnings about unused values and statements with no
effect, but it also fixes a logic error with freelocale(), since
previously it would not work as expected when used in the body of an if
statement without braces.

Uses real functions, because with macros, I don't think there is a way
to silence the "statement with no effect" warnings in the case where the
return value of uselocale() is ignored.
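
A hedged sketch of what such stubs can look like (illustrative, not mpv's
exact osdep code):

    struct stub_locale { int unused; };        /* hypothetical opaque handle */
    typedef struct stub_locale *locale_t;

    static struct stub_locale c_locale_stub;

    static locale_t newlocale(int category_mask, const char *locale, locale_t base)
    {
        (void)category_mask; (void)locale; (void)base;
        return &c_locale_stub;                 /* pretend creation succeeded */
    }

    /* A real function instead of a macro: ignoring the return value does not
     * trigger "statement with no effect" warnings. */
    static locale_t uselocale(locale_t newloc)
    {
        (void)newloc;
        return &c_locale_stub;                 /* the "previous" locale */
    }

    /* Behaves correctly even in the body of an unbraced if statement. */
    static void freelocale(locale_t loc)
    {
        (void)loc;                             /* nothing was allocated */
    }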

As for replacing these functions with working implementations, I
don't think this is possible for mpv's use-case, since MSVCRT does not
support UTF-8 locales, or any locale with multibyte characters that are
three or more bytes long. See:
https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setlocale-wsetlocale
2017-11-19 21:10:56 +11:00
wm4
cd6f964b56 demux_mkv: remove unnecessary parsing for vp9
We can finally get rid of this crap.

Depends on a ffmpeg-mpv change. Always worked with Libav (ever since
they fixed it properly).
2017-11-17 14:18:57 +01:00
pavelxdd
35d74e1612 w32_common: move imm32.dll function to w32->api struct
For consistency with the already implemented shcore.dll
function loading in w32->api:

Moved loading of imm32.dll to w32_api_load, and declared the
pImmDisableIME function pointer in the w32->api struct.
Removed unloading of imm32.dll.
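
A hedged sketch of the resulting pattern (struct layout is illustrative; the
function and field names follow the description above):

    #include <windows.h>

    struct w32_api {
        BOOL (WINAPI *pImmDisableIME)(DWORD idThread);
        /* ... other optionally loaded functions, e.g. from shcore.dll ... */
    };

    static void w32_api_load(struct w32_api *api)
    {
        /* imm32.dll stays loaded for the lifetime of the process; there is
         * no matching FreeLibrary() anymore. */
        HMODULE imm32 = LoadLibraryW(L"imm32.dll");
        api->pImmDisableIME = imm32
            ? (BOOL (WINAPI *)(DWORD))GetProcAddress(imm32, "ImmDisableIME")
            : NULL;
    }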
2017-11-15 22:54:54 +11:00
sfan5
3471e27efb vo_gpu/context_android: Process surface resizes correctly 2017-11-14 20:46:18 +02:00
James Ross-Gowan
ff1ee66231 appveyor: use git submodule update --init
Thanks @jeeb.
2017-11-13 23:19:03 +11:00
wm4
41243e7c4f demux_lavf: always give libavformat the filename when probing
This gives the filename or URL to the libavformat probing logic, which
might use the file extension as a "help" to decide which format the file
is. This helps with mp3 files that have large id3v2 tags and prevents
the idiotic ffmpeg probing logic from thinking that an mp3 file is amr.

(What we really want is knowing whether we _really_ need to feed more
data to libavformat to detect the format. And without having to pre-read
excessive amounts of data for relatively normal streams.)
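
A hedged sketch of the underlying libavformat call (not demux_lavf's exact
code):

    #include <libavformat/avformat.h>

    static const AVInputFormat *probe_with_filename(const char *filename,
                                                    uint8_t *buf, int buf_size)
    {
        AVProbeData pd = {
            .filename = filename, /* lets the extension act as a probing hint */
            .buf      = buf,      /* must be followed by AVPROBE_PADDING_SIZE
                                     zero bytes of padding */
            .buf_size = buf_size,
        };
        int score = 0;
        return av_probe_input_format2(&pd, 1 /* is_opened */, &score);
    }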
2017-11-12 19:38:45 +01:00
wm4
41cefe3e1f stream_libarchive, osdep: use stubs for POSIX 2008 locale on MinGW 2017-11-12 16:43:34 +01:00
wm4
987291d042 demux_playlist: support .url files
Requested. Not tested due to lack of real samples. Fixes #5107.
2017-11-12 15:51:48 +01:00
wm4
e2d0e0c6cb build: enable libarchive by default
Or libcve, as the vlc developers call it.
2017-11-12 13:51:30 +01:00
wm4
9922b6ff43 vo_gpu: ra_gl: remove stride hack
Same reasoning as in commit 9b5d062d36.
2017-11-12 13:49:00 +01:00
wm4
1e70e82baa stream_libarchive: workaround various types of locale braindeath
Fix libarchive failing to return filenames for UTF-8/UTF-16 entries.
The reason is that it uses locales and all that garbage, and mpv does
not set a locale.

Both C locales and wchar_t are shitfucked retarded legacy braindeath. If
the C/POSIX standard committee had actually competent members, these
would have been deprecated or removed long ago. (I mean, they managed to
remove gets().) To justify this emotional outbreak potentially insulting
to unknown persons, I will write a lot of text. Those not comfortable
with toxic language should pretend this is a religious text.

C locales are supposed to be a way to support certain languages and
cultures easier. One example are character codepages. Back when UTF-8
was not invented yet, there were only 255 possible characters, which is
not enough for anything but English and some European languages. So they
decided to make the meaning of a character dependent on the current
codepage. The locale (LC_CTYPE specifically) determines what character
encoding is currently used.

Of course nowadays, this is legacy nonsense. Everything uses UTF-8 for
"char", and what doesn't is broken and terrible anyway. But the old ways
stayed with us, and the stupidity of it as well.

C locales were utterly moronic even when they were invented. The locale
(via setlocale()) is global state, and global state is not a reasonable
way to do anything. It will break libraries, or well modularized code.
(The latter would be forced to strictly guard all entrypoints with
set/restore locales, assuming a single threaded world.)

On top of that, setting a locale randomly changes the semantics of a
bunch of standard functions. If a function respects locale, you suddenly
can't rely on it to behave the same on all systems. Some behavior can
come as a surprise, and of course it will be dependent on the region of
the user (it doesn't help that most software is US-centric, and the US
locale is almost like the C locale, i.e. almost what you expect).

Idiotically, locales were not just used to define the current character
encoding, but the concept was used for a whole lot of things, like e.g.
whether numbers should use "," or "." as decimal separator. The latter
issue is actually much worse, because it breaks basic string conversion
or parsing of numbers for the purpose of interacting with file formats
and such.

Much can be said about how retarded locales are, even beyond what I just
wrote, or will write below. They are so hilariously misdesigned and
insufficient, I can't even fathom how this shit was _standardized_. (In
any case, that meant everyone was forced to implement it.) Many C
functions can't even do it correctly. For example, the character set
encoding can be a multibyte encoding (not just UTF-8, but awful garbage
like Shift JIS (sometimes called SHIT JIZZ)), yet functions like
toupper() can return only 1 byte. Or just take the fact that the locale
API tries to define standard paper sizes (LC_PAPER) or telephone number
formatting (LC_TELEPHONE). Who the fuck uses this, or would ever use
this?

But the badness doesn't stop here. At some point, they invented threads.
And they put absolutely no thought into how threads should interact with
locales. So they kept locales as global state. Because obviously, you
want to be able to change the semantics of basic string processing
functions _while_ they're running, right? (Any thread can call
setlocale() at any time, and it's supposed to change the locale of all
other threads.)

At this point, how the fuck are you supposed to do anything correctly?
You can't even temporarily switch the locale with setlocale(), because
it would asynchronously fuckup the other threads. All you can do is to
enforce a convention not to set anything but the C locale (this is what
mpv does), or to duplicate standard functions using code that doesn't
query locale (this is what e.g. libass does, a close dependency of mpv).

Imagine they had done this for certain other things. Like errno, with
all the brokenness of the locale API. This simply wouldn't have worked,
shit would just have been too broken. So they didn't. But locales give a
delicious sweet spot of brokenness, where things are broken enough to
cause neverending pain, but not broken enough that enough effort would
have been spent to fix it completely.

On that note, standard C11 actually can't stringify an error value. It
does define strerror(), but it's not thread safe, even though C11
supports threads. The idiots could just have defined it to be thread
safe. Even if your libc is horrible enough that it can't return string
literals, it could just use some thread local buffer. Because C11 does
define thread local variables. But hey, why care about details, if you
can just create a shitty standard?

(POSIX defines strerror_r(), which "solves" this problem, while still
not making strerror() thread safe.)

Anyway, back to threads. The interaction of locales and threads makes no
sense. Why would you make locales process global? Who even wanted it to
work this way? Who decided that it should keep working this way, despite
being so broken (and certainly causing implementation difficulties in
libc)? Was it just a fucked up psychopath?

Several decades later, the moronic standard committees noticed that this
was (still is) kind of a bad situation. Instead of fixing the situation,
they added more garbage on top of it. (Probably for the sake of
"compatibility"). Now there is a set of new functions, which allow you
to override the locale for the current thread. This means you can
temporarily override and restore the locale on all entrypoints of your
code (like you could with setlocale(), before threads were invented).

And of course not all operating systems or libcs implement this. For
example, I'm pretty sure Microsoft doesn't. (Microsoft got to fuck it up
as usual, and only provides _configthreadlocale(). This is shitfucked on
its own, because it's GLOBAL STATE to configure that GLOBAL STATE should
not be GLOBAL STATE, i.e. completely broken garbage, because it requires
agreement over all modules/libraries what behavior should be used. I
mean, sure, making setlocale() affect only the current thread would have
been the reasonable behavior. Making this behavior configurable isn't,
because you can't rely on what behavior is active.)

POSIX showed some minor decency by at least introducing some variations
of standard functions, which have a locale argument (e.g. toupper_l()).
You just pass the locale which you want to be used, and don't have to do
the set locale/call function/restore locale nonsense. But OF COURSE they
fucked this up too. In no less than 2 ways:

- There is no statically available handle for the C locale, so you have
  to initialize and store it somewhere, which makes it harder to make
  utility functions safe, that call locale-affected standard functions
  and expect C semantics. The easy solution, using pthread_once() and a
  global variable with the created locale, will not be easily accepted
  by pedantic assholes, because they'll worry about allocation failure,
  or leaking the locale when using this in library code (and then
  unloading the library). Or you could have complicated library
  init/uninit functions, which bring a big load of their own mess.
  Same for automagic DLL constructors/destructors.
- Not all functions have a variant that takes a locale argument, and
  they missed even some important ones, like snprintf() or strtod() WHAT
  THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT
  THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK

I would like to know why it took so long to standardize a half-assed
solution, that, apart from being conceptually half-assed, is even
incomplete and insufficient. The obvious way to fix this would have
been:

- deprecate the entire locale API and their use, and make it a NOP
- make UTF-8 the standard character type
- make the C locale behavior the default
- add new APIs that explicitly take locale objects
- provide an emulation layer, that can be used to transparently build
  legacy code without breaking it

But this wouldn't have been "compatible", and the apparently incompetent
standard committees would have never accepted this. As if anyone
actually used this legacy garbage, except other legacy garbage. Oh yeah,
and let's care a lot about legacy compatibility, and let's not care at
all about modern code that either has to suffer from this, or subtly
breaks when the wrong locales are active.

Last but not least, the UTF-8 locale name is apparently not even
standardized. At the moment I'm trying to use "C.UTF-8", which is
apparently glibc _and_ Debian specific. Got to use every opportunity to
make correct usage of UTF-8 harder. What luck that this commit is only
for some optional relatively obscure mpv feature.

Why is the C locale not UTF-8? Why did POSIX not standardize an UTF-8
locale? Well, according to something I heard a few years ago, they're
considering disallowing UTF-8 as locale, because UTF-8 would violate
certain invariants expected by C or POSIX. (But I'm not sure if I
remember this correctly - probably better not to rage about it.)

Now, on to libarchive.

libarchive intentionally uses the locale API and all the broken crap
around it to "convert" UTF-8 or UTF-16 (as contained in reasonably sane
archive formats) to "char*". This is a good start!

Since glibc does not think that the C locale uses UTF-8, this fails for
mpv. So trying to use archive_entry_pathname() to get the archive entry
name fails if the name contains non-ASCII characters.

Maybe use archive_entry_pathname_utf8()? Surely that should return
UTF-8, since its name seems to indicate that it returns UTF-8. But of
fucking course it doesn't! libarchive's horribly convoluted code (that
is full of locale API usage and other legacy shit, as well as ifdefs and
OS specific code, including Windows and fucking Cygwin) somehow fucks up
and fails if the locale is not set to UTF-8. I made a PR fixing this in
libarchive almost 2 years ago, but it was ignored.

So, would archive_entry_pathname_w() as fallback work? No, why would it?
Of course this _also_ involves shitfucked code that calls shitfucked
standard functions (or OS specific ifdeffed shitfuck). The truth is that
at least glibc changes the meaning of wchar_t depending on the locale.
Contrary to what most people think, wchar_t is not standardized to be an UTF
variant (or even unicode) - it's an encoding that uses basic units that
can be larger than 8 bit. It's an implementation defined thing. Windows
defines it to 2 bytes and UTF-16, and glibc defines it to 4 bytes and
UTF-32, but only if an UTF-8 locale is set (apparently).

Yes. Every libarchive function dealing with strings has 3 variants:
plain, _utf8, and _w. And none of these work if the locale is not set.
I cannot fathom why they even have a wchar_t variant, because it's
redundant and fucking useless for any modern code.

Writing a UTF-16 to UTF-8 conversion routine is maybe 3 pages of code,
or a few lines if you use iconv. But libarchive uses all this glorious
bullshit, and ends up with 3 not working API functions, and with over
4000 lines of its own string abstraction code with gratuitous amounts of
ifdefs and OS dependent code that breaks in a fairly common use case.

So what we do is:

- Use the idiotic POSIX 2008 API (uselocale() etc.) (Too bad for users
  who try to build this on a system that doesn't have these - hopefully
  none are left in 2017. But if there are, torturing them with obscure
  build errors is probably justified. Might be bad for Windows though,
  which is a very popular platform except on phones.)
- Use the "C.UTF-8" locale, which is probably not 100% standards
  compliant, but works on my system, so it's fine.
- Guard every libarchive call with uselocale() + restoring the locale
  (see the sketch after this list).
- Be lazy and skip some libarchive calls. Look forward to the unlikely
  and astonishingly stupid bugs this could produce.
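
A minimal sketch of such a guard (the wrapper name is hypothetical, and error
handling is omitted since uselocale() is assumed not to fail):

    #define _POSIX_C_SOURCE 200809L   /* for newlocale()/uselocale() */
    #include <locale.h>
    #include <archive_entry.h>

    static locale_t utf8_loc;   /* created once at init, e.g. with
                                   newlocale(LC_ALL_MASK, "C.UTF-8", (locale_t)0) */

    static const char *entry_pathname_guarded(struct archive_entry *e)
    {
        locale_t old = uselocale(utf8_loc);        /* per-thread switch */
        const char *name = archive_entry_pathname(e);
        uselocale(old);                            /* restore caller's locale */
        return name;
    }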

We could also just set a C UTF-8 locale in main (since that would have no
known negative effects on the rest of the code), but this won't work for
libmpv.

We assume that uselocale() never fails. In an unexplainable stroke of
luck, POSIX made the semantics of uselocale() nice enough that user code
can ignore failures without introducing crash or security bugs, even if
there should be an implementation fucked up enough where it's actually
possible that uselocale() fails even with valid input.

With all this shitty ugliness added, it finally works, without fucking
up other parts of the player. This is still less bad than that time when
libquvi fucked up OpenGL rendering, because calling a libquvi function
would load some proxy abstraction library, which in turn loaded a KDE
plugin (even if KDE was not used), which in turn called setlocale()
because Qt does this, and consequently made the mpv GLSL shader
generation code emit "," instead of "." for numbers, and of course only
for users who had that KDE plugin installed, and lived in a part of the
world where "." is not used as decimal separator.

All in all, I believe this proves that software developers as a whole
and as a culture produce worse results than drug addicted butt fucked
monkeys randomly hacking on typewriters while inhaling the fumes of a
radioactive dumpster fire fueled by Chinese plastic toys for children
and Elton John/Justin Bieber crossover CDs for all eternity.
2017-11-12 13:36:56 +01:00
James Ross-Gowan
9b5d062d36 vo_gpu: d3d11: remove flipped texture upload hack
Made unnecessary by 4a6b04bdb9.
2017-11-12 17:10:22 +11:00
Akemi
007cd24c2b osx: fix the bundle $PATH yet again
we have 5 parameters for the string but only 4 were being used.
2017-11-11 19:21:21 +01:00
Akemi
fb1d3caa9e cocoa: always return the target NSRect when in fullscreen
there is no need to calculate a new rectangle when in fullscreen since
we always want to cover the whole screen. so just return the target
rectangle.
2017-11-11 19:19:28 +01:00
wm4
871a8a316a demux: avoid queue overflow warning when joining two ranges
If the backbuffer is much larger than the forward buffer, and if you
join a small range with a large range (larger than the forward buffer),
then the seek issued to the end of the range after joining will overflow
the queue.

Normally, read_more will be false when the forward buffer is full, but
the resume seek after joining will set need_refresh to true, which
forces more reading and thus triggers the overflow warning.

Attempt to fix this by not setting read_more to true on refresh seeks.
Set prefetch_more instead. read_more will still be set if an A/V stream
has no data.
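
Illustrative only (field names roughly follow demux.c; buffer_full and
warn_overflow() stand in for the real checks, this is not the actual diff):

    bool read_more = false, prefetch_more = false;
    for (int n = 0; n < in->num_streams; n++) {
        struct demux_stream *ds = in->streams[n]->ds;
        if (ds->eager && !ds->reader_head)
            read_more = true;       /* a selected A/V stream ran out of data */
        if (ds->refreshing)
            prefetch_more = true;   /* refresh/resume seek: prefetch only */
    }
    if (buffer_full) {
        if (read_more)
            warn_overflow();        /* refresh seeks no longer end up here */
        read_more = prefetch_more = false;
    }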

This doesn't help with the following problems related to using refresh
seeks for track switching:

- If the forward buffer is full, then enabling another track will
  obviously overflow the queue immediately, and lead to
  marking the new track as having no more data (i.e. EOF). We could cut
  down the forward buffer or so, but there's no simple way to implement
  it. Another possibility would be dropping all buffers and trying to
  resume again, but this would likely be complex as well.
- Subtitle tracks will not even show a warning (because they are sparse,
  and we have no way of telling whether a packet is missing, or there's
  just no packet near the current position). Before this commit,
  enabling an empty subtitle track would probably have overflown the
  queue, because ds->refreshing was never set to true. Possibly this
  could be solved by determining a demuxer read position, which would
  reflect until which PTS all subtitle packets should have been demuxed.

The forward buffer limit was intended as a last safeguard to avoid
excessive memory usage against badly interleaved files or decoders going
crazy (up to reading the whole file into memory and OOM'ing the user's
system). It's not good at all to limit prefetch. Possible solutions
include having another smaller limit for prefetch, or maybe having only
a total buffer limit, and discarding back buffer if more data has to be
read. The current solution is making the forward buffer larger than the
forward duration (--cache-secs) would require, but of course this
depends on the stream's bitrate.
2017-11-11 06:23:50 +01:00
wm4
8e50dc1b4d demux: export demuxer cache sizes in bytes
Plus sort of document them, together with the already existing
undocumented fields. (This is mostly for debugging, so use is
discouraged.)
2017-11-10 16:43:18 +01:00
wm4
1b0dc7d169 demux: use seekable cache for network by default, bump prefetch limit
The option for enabling it now has an "auto" choice, which is the
default, and which will enable it if the media is thought to be via
network or if the stream cache is enabled (same logic as --cache-secs).

Also bump the --cache-secs default from 10 to 120.
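
A hedged sketch of the "auto" decision (the helper is hypothetical):

    #include <stdbool.h>

    /* Decide whether the seekable cache should be on, mirroring the
     * --cache-secs logic for the "auto" default (opt_value < 0). */
    static bool use_seekable_cache(int opt_value, bool is_network, bool cache_on)
    {
        if (opt_value >= 0)
            return opt_value;              /* explicit yes/no */
        return is_network || cache_on;     /* "auto" */
    }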
2017-11-10 16:30:43 +01:00
wm4
618b8a33e5 demux_mkv: fix potential uninitialized variable read 2017-11-10 12:49:53 +01:00
wm4
6bcdcaeeea demux: set default back buffer to some high value
Some back buffer is required to make the immediate forward range
seekable. This is because the back buffer limit is strictly enforced.
Just set a rather high back buffer by default. It's not used if
--demuxer-seekable-cache is disabled, so this is without risk.
2017-11-10 12:37:19 +01:00
wm4
b0de1ac36c demux: limit number of seek ranges to a static maximum
Limit the number of cached ranges to MAX_SEEK_RANGES, which is also the
maximum number of exported seek ranges. It makes no sense to keep
them, as the user won't see them anyway. Remove the smallest ones to
enforce the limit if the number grows too high.
2017-11-10 12:32:40 +01:00
wm4
c8bb78bad8 demux: speed up cache seeking with a coarse index
Helps a little bit, I guess. But in general, t(h)rashing the cache kills
us anyway.

This has a fixed number of index entries. Entries are added/removed as
packets go through the packet queue. A keyframe is added only if it comes
at least index_distance seconds after the previous entry. If there are
too many keyframe packets, the existing
index is reduced by half, and index_distance is doubled. This should
provide somewhat even spacing between the entries.
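
A hedged sketch of the scheme (struct and field names are illustrative, not
the actual demux.c code):

    #define INDEX_SIZE 16   /* illustrative; the real entry count may differ */

    struct coarse_index {
        double index_distance;            /* min. seconds between two entries */
        int num_entries;
        struct demux_packet *entries[INDEX_SIZE];
    };

    static void index_add_keyframe(struct coarse_index *idx,
                                   struct demux_packet *pkt)
    {
        if (idx->num_entries && pkt->pts <
            idx->entries[idx->num_entries - 1]->pts + idx->index_distance)
            return;                       /* too close to the previous entry */
        if (idx->num_entries == INDEX_SIZE) {
            /* Thin out: keep every second entry, double the spacing. */
            for (int n = 0; n < INDEX_SIZE / 2; n++)
                idx->entries[n] = idx->entries[n * 2];
            idx->num_entries = INDEX_SIZE / 2;
            idx->index_distance *= 2;
        }
        idx->entries[idx->num_entries++] = pkt;
    }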
2017-11-10 12:17:34 +01:00
wm4
2485b899c3 demux: avoid wasting time by stopping packet search as early as possible
The packet queue is sorted, so we can stop the search if we have found a
packet, and the next packet in the queue has a higher PTS than the seek
PTS (for the sake of SEEK_FORWARD, we still consider the first packet
with a higher PTS).

Also, as a mostly cosmetic change, but which might be "faster", check
target for NULL, instead of target_diff for a magic float value.
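
A hedged sketch of the loop shape (illustrative names; the queue is sorted by
PTS, and the SEEK_FORWARD nuance is only noted in the comment):

    struct demux_packet *target = NULL;
    for (struct demux_packet *dp = queue_head; dp; dp = dp->next) {
        if (!dp->keyframe)
            continue;
        if (target && dp->pts > seek_pts)
            break;          /* sorted queue: nothing better can follow */
        target = dp;        /* last keyframe <= seek_pts, or, if there is
                               none, the first one after it (SEEK_FORWARD) */
    }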
2017-11-10 12:11:33 +01:00
wm4
968a24772e demux: simplify remove_packet() function
Turns out this is only ever used to remove the head element anyway.
2017-11-10 11:35:19 +01:00
wm4
65d36013dd demux: fix failure to join ranges with subtitles in some cases
Subtitle streams are sparse, and no overlap is required to correctly
join two cached ranges. This was not correctly handled iff the two
ranges had different subtitle packets close to the join point.
2017-11-10 11:06:07 +01:00
wm4
4681b4b28b demux: reverse which range is reused when joining them
Which one to use doesn't really matter, but reusing the first one will
probably be slightly more convenient later on.
2017-11-10 10:46:54 +01:00
wm4
f123cc4c9b demux: fix a race condition with async seeking
demux_add_packet() must completely ignore any packets that are added
while a queued seek is not initiated yet.

The main issue is that after setting in->seeking==true, the central lock
is released, and it can take "a while" until it's reacquired on the
demux thread and the seek is actually initiated. During that time,
packets could be read and added, that have nothing to do with the new
state.
2017-11-10 10:23:49 +01:00
wm4
d3bc93cf2e demux: get rid of an unnecessary field 2017-11-10 10:15:37 +01:00
wm4
4a6b04bdb9 vo_gpu: never pass flipped images to ra or ra backends
Seems like the last refactor to this code broke playing flipped images,
at least with --opengl-pbo --gpu-api=opengl.

Flipping is rather a shitmess. The main problem is that OpenGL does not
support flipped uploading. The original vo_gl implementation considered
it important to handle the flipped case efficiently, so instead of
uploading the image line by line backwards, it uploaded it flipped, and
then flipped it in the renderer (basically the upload path ignored the
flipping). The ra code and backends probably have an insane and
inconsistent mix of semantics, so fix this by never passing them flipped
images in the first place.
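
A hedged sketch of the generic un-flip trick (illustrative image struct; real
mp_image planes have per-plane heights):

    #include <stddef.h>

    struct image {
        int h;                     /* plane height */
        unsigned char *planes[4];
        int strides[4];            /* bytes per row; negative after flipping */
    };

    /* Point each plane at its last row and negate the stride, so every
     * consumer sees a normally oriented image and no ra backend has to
     * special-case flipped uploads. */
    static void unflip_image(struct image *img, int num_planes)
    {
        for (int p = 0; p < num_planes; p++) {
            img->planes[p] += (ptrdiff_t)img->strides[p] * (img->h - 1);
            img->strides[p] = -img->strides[p];
        }
    }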

In the future, the backends should probably support flipped images
directly.

Fixes #5097.
2017-11-10 10:06:33 +01:00
wm4
ed1af99727 demux: attempt to accurately reflect seek range with muxed subtitles
If subtitles are part of the stream, determining the seekable range
becomes harder. Subtitles are sparse, and can have packets in irregular
intervals, or even completely lack packets. The usual logic of computing
the seek range by the min/max packet timestamps fails.

Solve this by making the only assumption we can make: subtitle packets
are implicitly demuxed along with other packets. We also assume perfect
interleaving for this, but you really can't do anything with sparse
packets that makes sense without this assumption.

One special case is if we prune sparse packets within the current
seekable range. Obviously this should limit the seekable range to after
the pruned packet.
2017-11-10 08:57:37 +01:00
wm4
7c1e7468e6 demux: reduce indentation for two functions
Remove the single-exit construct that added a huge if statement containing
everything, just for some corner case.
2017-11-10 04:41:56 +01:00
wm4
6493ef2e34 demux: some minor mostly cosmetics
None of these change functionality in any way (although the log level
changes obviously change the terminal output).
2017-11-10 04:36:50 +01:00
wm4
a0b6f805f9 demux: simplify a function
update_stream_selection_state() doesn't need all these arguments. Not
sure what I was thinking here.
2017-11-10 04:08:15 +01:00
wm4
8c5ef33044 demux: change how refreshes on track switching are handled
Instead of weirdly deciding this on every packet read and having the
code far away from where it's actually needed, just run it where it's
actually needed.
2017-11-10 03:56:24 +01:00
wm4
25ed7ff0c8 demux: get rid of weird backwards loop
A typical idiom for calling functions that remove something from the
array being iterated, but it's not needed here. I have no idea why this
was ever done.
2017-11-10 03:26:40 +01:00
wm4
9c330b53e3 demux: avoid broken readahead when joining ranges
Setting ds->refreshing for unselected streams could lead to a
nonsensical queue overflow warning, because read_packet() took it as a
reason to continue reading.

Also add some more information to the queue overflow warning (even if
that one doesn't have anything to do with this bug - it was for
unselected streams only).
2017-11-10 03:19:25 +01:00
wm4
c494049e76 demux: reduce difference between threaded and non-threaded mode
This fixes an endless loop with threading disabled, such as for example
when playing a file with an external subtitle file, and seeking to the
beginning. Something will set in->seeking, but the seek is never
executed, which made demux_read_packet() loop endlessly. (External
subtitles will use non-threaded mode for whatever reasons.)

Fix this by making the unthreaded code execute the worker thread
body, which reduces the difference in logic.
2017-11-10 02:55:02 +01:00
wm4
935e406d63 demux: support multiple seekable cached ranges
Until now, the demuxer cache was limited to a single range. Extend this
to multiple ranges. Should be useful for slow network streams.

This commit changes a lot in the internal demuxer cache logic, so
there's a lot of room for bugs and regressions. The logic without
demuxer cache is mostly untouched, but also involved with the code
changes. Or in other words, this commit probably fucks up shit.

There are two things which make multiple cached ranges rather hard:

1. the need to resume the demuxer at the end of a cached range when
   seeking to it
2. joining two adjacent ranges when the lower range "grows" into it (and
   resuming the demuxer at the end of the new joined range)

"Resuming" the demuxer means that we perform a low level seek to the end
of a cached range, and properly append new packets to it, without adding
packets multiple times or creating holes due to missing packets.

Since audio and video never line up exactly, there is no clean "cut"
possible, at which you could resume the demuxer cleanly (for 1.) or
which you could use to detect that two ranges are perfectly adjacent
(for 2.). The way how the demuxer interleaves multiple streams is also
unpredictable. Typically you will have to expect that it randomly allows
one of the streams to be ahead by a bit, and so on.

To deal with this, we have heuristics in place to detect when one packet
equals or is "behind" a packet that was demuxed earlier. We reuse the
refresh seek logic (used to "reread" packets into the demuxer cache when
enabling a track), which checks for certain packet invariants.
Currently, it observes whether either the raw packet position, or the
packet DTS is strictly monotonically increasing. If neither of them is
true, we discard old ranges when creating a new one.
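
A hedged sketch of that invariant check (field names follow mpv's
demux_packet, but this is not the actual demux.c code):

    #include <stdbool.h>

    /* pkt->pos, pkt->dts and MP_NOPTS_VALUE as in mpv's packet struct. */
    static bool packet_looks_new(struct demux_packet *pkt,
                                 struct demux_packet *last)
    {
        if (pkt->pos >= 0 && last->pos >= 0 && pkt->pos > last->pos)
            return true;    /* raw packet position strictly increased */
        if (pkt->dts != MP_NOPTS_VALUE && last->dts != MP_NOPTS_VALUE &&
            pkt->dts > last->dts)
            return true;    /* DTS strictly increased */
        return false;       /* neither invariant holds: drop other ranges */
    }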

This heavily depends on the file format and the demuxer behavior. For
example, not all file formats have DTS, and the packet position can be
unset due to libavformat not always setting it (e.g. when parsers are
used).

At the same time, we must deal with all the complicated state used to
track prefetching and seek ranges. In some complicated corner cases, we
just give up and discard other seek ranges, even if the previously
mentioned packet invariants are fulfilled.

To handle joining, we're being particularly dumb, and require a small
overlap to be confident that two ranges join perfectly. (This could be
done incrementally with as little overlap as 1 packet, but corner cases
would eat us: each stream needs to be joined separately, and the cache
pruning logic could remove overlapping packets for other streams again.)

Another restriction is that switching the cached range will always
trigger an asynchronous low level seek to resume demuxing at the new
range. Some users might find this annoying.

Interleaved subtitles are not fully handled yet. The code will
clamp the seekable range to where subtitle packets are.
2017-11-09 10:23:57 +01:00
James Ross-Gowan
bd4ec8e4e1 appveyor: update ffmpeg and test d3d11/vulkan
Build ffmpeg-mpv, shaderc and crossc from source, since they are not
packaged in MSYS2. Also, add some more explicit --enable flags to the
mpv build to make sure things like D3D11, D3D11VA hwaccels and Vulkan
are auto-detected.
2017-11-08 07:22:54 +11:00
James Ross-Gowan
e7bf5576e5 vo_gpu: hwdec_d3d11va: allow zero-copy video decoding
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx

In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.

Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
2017-11-07 20:27:13 +11:00
James Ross-Gowan
b258d82d6e vo_gpu: d3d11: enhance cache invalidation
The shader cache in ra_d3d11 caches the result of shaderc, crossc and
the D3DCompiler DLL, so it should be invalidated when any of those
components are updated. This should make the cache more reliable, which
makes it safer to enable gpu-shader-cache-dir. Shader compilation is
slow with D3D11, so gpu-shader-cache-dir is highly necessary.
2017-11-07 20:27:13 +11:00
James Ross-Gowan
b9c1286893 vo_gpu: d3d11: log shader compilation times
Some shaders take a _long_ time to compile with the Direct3D compiler.
The ANGLE backend had this problem too, to a certain extent. Logging
should help identify which shaders cause long stalls and could also help
with benchmarking ways of reducing compile times.
2017-11-07 20:27:13 +11:00
James Ross-Gowan
4b014b3a81 vo_gpu: move d3d11_screenshot to shared code
This can be used by the ANGLE backend and ra_d3d11.
2017-11-07 20:27:13 +11:00
James Ross-Gowan
9b2dae79b1 vo_gpu: d3d11: add RA caps for ra_d3d11
ra_d3d11 uses the SPIR-V compiler to translate GLSL to SPIR-V, which is
then translated to HLSL. This means it always exposes the same GLSL
version that the SPIR-V compiler supports (4.50 for shaderc/glslang.)

Despite claiming to support GLSL 4.50, some features that are tied to
the GLSL version in OpenGL are not supported by ra_d3d11 when targeting
legacy Direct3D feature levels.

This includes two features that mpv relies on:
- Reading from gl_FragCoord in the fragment shader (requires FL 10_0)
- textureGather from any texture component (requires FL 11_0)

These features have been exposed as new RA caps.
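
A hedged sketch of how those caps map to feature levels (the flag names are
what one would expect; the actual names in ra.h may differ):

    #define COBJMACROS
    #include <d3d11.h>

    static void set_caps(struct ra *ra, ID3D11Device *dev)
    {
        D3D_FEATURE_LEVEL fl = ID3D11Device_GetFeatureLevel(dev);
        if (fl >= D3D_FEATURE_LEVEL_10_0)
            ra->caps |= RA_CAP_FRAGCOORD;  /* gl_FragCoord usable (FL 10_0+) */
        if (fl >= D3D_FEATURE_LEVEL_11_0)
            ra->caps |= RA_CAP_GATHER;     /* textureGather anywhere (FL 11_0+) */
    }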
2017-11-07 20:27:13 +11:00
James Ross-Gowan
68eac1a1e7 vo_gpu: d3d11: initial implementation
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.

What works:

- All of mpv's internal shaders should work, including compute shaders.

- Some external shaders have been tested and work, including RAVU and
  adaptive-sharpen.

- Non-dumb mode works, even on very old hardware. Most features work at
  feature level 9_3 and all features work at feature level 10_0. Some
  features also work at feature level 9_1 and 9_2, but without high-bit-
  depth FBOs, it's not very useful. (Hardware this old is probably not
  fast enough for advanced features anyway.)

  Note: This is more compatible than ANGLE, which requires 9_3 to work
  at all (GLES 2.0,) and 10_1 for non-dumb-mode (GLES 3.0.)

- Hardware decoding with D3D11VA, including decoding of 10-bit formats
  without truncation to 8-bit.

What doesn't work / can be improved:

- PBO upload and direct rendering does not work yet. Direct rendering
  requires persistent-mapped PBOs because the decoder needs to be able
  to read data from images that have already been decoded and uploaded.
  Unfortunately, it seems like persistent-mapped PBOs are fundamentally
  incompatible with D3D11, which requires all resources to use driver-
  managed memory and requires memory to be unmapped (and hence pointers
  to be invalidated) when a resource is used in a draw or copy
  operation.

  However it might be possible to use D3D11's limited multithreading
  capabilities to emulate some features of PBOs, like asynchronous
  texture uploading.

- The blit() and clear() operations don't have equivalents in the D3D11
  API that handle all cases, so in most cases, they have to be emulated
  with a shader. This is currently done inside ra_d3d11, but ideally it
  would be done in generic code, so it can take advantage of mpv's
  shader generation utilities.

- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
  it does not expose a C interface itself.

  The library is available here: https://github.com/rossy/crossc

- The D3D11 context could be made to support more modern DXGI features
  in future. For example, it should be possible to add support for
  high-bit-depth and HDR output with DXGI 1.5/1.6.
2017-11-07 20:27:13 +11:00