The format hasn't changed, so we don't need to bump the version number
for the messages the server sends to recipients.
This is implemented in two places: the Rust side for round-trip
testing, and the Java side for what the server actually does. (Both
are implemented to avoid unnecessary copies and unfortunately the two
aren't conveniently compatible with one another, but it's a simple
implementation anyway.)
This takes advantage of the fact that multiple devices for the same
user will have the same identity key and therefore will use the same
per-recipient SSv2 data anyway.
This commit also enforces (on the server side) that device IDs are in
the range 1..=127 for destinations of a SSv2 message; previously they
were varint-encoded.
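The reason restricting device IDs keeps the format unchanged is that
values 1..=127 occupy exactly one byte under varint encoding anyway. A
minimal sketch (illustrative only, not the server's actual encoder):

```java
import java.io.ByteArrayOutputStream;

final class VarintSketch {
    // Standard little-endian base-128 varint: 7 bits per byte, high bit
    // set on every byte except the last.
    static byte[] encodeVarint(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }
}
```

Values 1 through 127 encode to a single byte; 128 is the first value
that needs two, which is why 1..=127 is the natural cutoff.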
When send support is added, the round-trip Rust tests will
automatically start testing this as well.
This adds integration bits for the new webpsan, a WebP image sanitizer -- which
currently simply checks the validity of a WebP file input, so that passing a
malformed file to an unsafe parser can be avoided. The integration largely
reuses the work that was already done for mp4san.
Allows a client to request a credential for a backup-id without
revealing the backup-id to the issuing server. Later, the client may use
this to make requests for the backup-id without identifying themselves
to the server.
This will let us (a) avoid hardcoding any particular async runtime in
the libsignal-bridge macros, and (b) separate the platform-specific
stuff from the async runtime. libsignal_bridge now has an AsyncRuntime
trait whose only requirement is "run a self-contained Future".
For now, the "runtime" is spawning a thread that then uses
now_or_never, but eventually this will be a persistent tokio runtime
of some kind.
Also for now, this is only implemented for Java. Swift and Node
support coming soon.
For the most part this should happen transparently without any
explicit adoption, like the previous change, but for Java code the
NoSessionException is now properly declared on SessionCipher.encrypt.
(This was always technically possible, but clients were expected to
have previously checked for session validity before using
SessionCipher; now that there's an expiration involved, that's not
strictly possible.)
And consolidate the implementations of these two separate checks; now
they both check for a valid session by looking for a sender chain
instead of just *some* current session, in addition to the new check
for an expired unacknowledged session. At the Rust level, this is now
one check named has_usable_sender_chain; at the app levels, the old
names of hasSenderChain (Java) and hasCurrentState (Swift, TypeScript)
have been preserved.
Tests to come in the next commit.
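The consolidated check can be modeled roughly like this (stand-in types
and parameters, not the real SessionRecord API): a session is usable
only if it has a sender chain and, when still unacknowledged, has not
yet expired.

```java
final class SessionCheck {
    // Illustrative model of has_usable_sender_chain: the expiration
    // window and flags here are hypothetical inputs, not real fields.
    static boolean hasUsableSenderChain(boolean hasSenderChain,
                                        boolean unacknowledged,
                                        long createdAtMillis,
                                        long nowMillis,
                                        long maxUnackedAgeMillis) {
        if (!hasSenderChain) {
            return false; // merely having *some* session state isn't enough
        }
        if (unacknowledged && nowMillis - createdAtMillis > maxUnackedAgeMillis) {
            return false; // new check: expired unacknowledged session
        }
        return true;
    }
}
```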
Like ProtocolAddresses in 88a2d5c, these APIs will eventually only
support ACIs, so introducing strong types now helps move in that
direction. However, the existing APIs that produce strings have not
been removed yet.
These work the same as the equivalent factory methods on ServiceId,
but throw if the resulting parsed ServiceId doesn't match the specific
type you were trying to parse.
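The "parse any, then require a specific kind" pattern looks roughly
like the following. Everything here is a stand-in (including the PNI:
string prefix and the names parseAny/requireKind); the real factory
methods live on ServiceId and its subtypes.

```java
import java.util.UUID;

final class ServiceIdSketch {
    enum Kind { ACI, PNI }

    record ParsedId(Kind kind, UUID uuid) {}

    // Accepts either kind, like the ServiceId factory methods.
    static ParsedId parseAny(String s) {
        if (s.startsWith("PNI:")) {
            return new ParsedId(Kind.PNI, UUID.fromString(s.substring(4)));
        }
        return new ParsedId(Kind.ACI, UUID.fromString(s));
    }

    // Parses, then throws if the result isn't the kind the caller wanted.
    static ParsedId requireKind(String s, Kind expected) {
        ParsedId parsed = parseAny(s);
        if (parsed.kind() != expected) {
            throw new IllegalArgumentException(
                "expected " + expected + " but parsed a " + parsed.kind());
        }
        return parsed;
    }
}
```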
Only the iOS client ever used this extra parameter, and it's one
that's easily stored alongside the reference to a store. This is
massively simpler than having it threaded down to the Rust
libsignal_protocol and back up through the bridging layer.
In a future release ProtocolAddresses will *only* support ServiceIds,
so these APIs are designed to be the nullable version of the signature
they'll eventually have. Since ProtocolAddresses are created by the
client app in nearly all cases, they should be able to ignore the null
case if they only use ServiceIds in their input.
This is a minimal change to not lose information that we already have
in Rust; there may be further changes in the future (such as avoiding
the redundancy now in ProtocolNoSessionException, or splitting out
missing Sender Key sessions, which don't have an address, from missing
Double Ratchet sessions).
The JNI tests have also been conditionalized in case we want to take
this out for Android as well. (Node still unconditionally depends on
it being present.) I've given it a separate feature flag from just
ffi/jni/node so that we can preserve the tests Jessa wrote for each
platform.
This MP4 format "sanitizer" currently only transforms (when necessary) outgoing media on iOS, Android, or Desktop to
make it suitable for streaming playback by the recipient. In the future, it will validate and be able to either repair
or reject outbound AND inbound media, to prevent malformed media from being fed to third-party or OS media players.
A generic io module was added to the libsignal Rust bridge containing the InputStream trait, modeled loosely after
Java's InputStream, which calls back into the client language to perform reads or skips. This infrastructure could
potentially also be used for any other future large data inputs to libsignal functions.
This is very similar to the AuthCredential used by the group server,
but using CallLinkParams to encrypt the user ID rather than
GroupParams (and using GenericServerParams to issue the credential
rather than the group server's ServerParams).
This will allow a user to request to create a call link from the chat
server without revealing anything about the room, and then later
actually create it by giving the room ID to the calling server without
identifying themself.
This involves a new, stripped-down GenericServer{Secret,Public}Params,
which currently only contains a generic "zkcredential" key. Apart from
the calling server not needing to handle all the credentials that the
group storage server supports, the structure of zkcredential means it
is safe to use the same key for multiple kinds of credentials.
Similarly, CallLink{Secret,Public}Params plays the same role as
Group{Secret,Public}Params for encrypting user IDs when talking to the
calling server.
Following from that, the APIs for CreateCallLinkCredentials are
located on the individual types (RequestContext, Request, Response,
Credential, Presentation) rather than all being on the Server*Params
types; adding a new credential type won't change the API of the
Server*Params types at all.
The main Server*Params may make use of zkcredential in the future as
well, but for now it's only for new Signal servers that want to use
zero-knowledge credentials.
- We weren't loading the native library as "signal_jni.dll"
- The Gradle build commands, though still requiring a shell environment,
shouldn't rely on Unix-style #! lines to execute shell scripts
These are intertwined: older versions of Rust don't support the newer
NDK, but the newer Rust can't successfully compile BoringSSL against
the older NDK.
This requires a boring-sys update to find the Android NDK sysroot in
the right place.
Currently the only user of this method is the ProtocolException constructor,
when an UnidentifiedSenderMessageContent is present.
All other instances of ProtocolException use the sender's UUID as the
sender, so it would be good to make this consistent.
Also brings this in line with similar methods, like `getSourceIdentifier` on
SignalServiceEnvelope.
Removes AuthCredentialPresentationV1 and PniCredentialPresentationV1
entirely. For ProfileKeyCredentialPresentationV1, there are still
situations where we want to extract the UUID and profile key, so we
continue to support parsing only.
-i (interactive) and -t (allocate a tty) allow the shell running
inside Docker to handle Ctrl-C (^C) and other shell commands, so you
can stop a command in the interactive process where you ran it.
However, they only work if the containing shell (the one where you ran
`docker run`) is also interactive with a tty hooked up, so we test for
that first in both scripts that invoke `docker run`, using `test -t`.
--init passes signals from *outside* Docker down to its subprocesses,
so that cancellation from *another* context works for our Docker
images. This includes the Cancel button in GitHub Actions.
This was previously in the Java layer because it only really affects
the server, but it's more consistent to have all verification in the
Rust layer. We do lose the separate exception type for it, though.
Adds a java method for libsignal-server that enables extracting
attestation metrics from serialized evidence and endorsements.
Certificate and endorsement validity periods are exposed, so servers
can track if any attestation material is overly stale.
- Use the headless variant of the JDK.
- Put most apt-get requirements at the end of the file, so that
tweaking them can make use of Docker's per-RUN line caching.
- Added 'clang' as a build dependency for BoringSSL.
- Drop unnecessary packages:
- apt-transport-https - we're using plain http sources at this time
- build-essential - overkill, we just need 'make'
- gcc-multilib - was used to build OpenSSL for testing,
no longer necessary with the switch to BoringSSL
- openssh-client - was used to clone from GitHub, now unused because
all dependencies are public
And note that the "slow tests" should also be passing before a
release.
Implements (a subset of) Intel's DCAP attestation,
making heavy use of 'boring' for X509 and ECDSA.
Cds2Client is now ready for use!
Co-authored-by: Jordan Rose <jrose@signal.org>
Co-authored-by: Ravi Khadiwala <ravi@signal.org>
This is a variant of AuthCredential that carries two UUIDs, intended
to be a user's ACI and PNI. Why? Because when you've been invited to a
group, you may have been invited by your ACI or by your PNI, or by
both, and it's easier for clients to treat all those states the same
by having a credential that covers both identities. The downside is
that it's larger (both the data, obviously, but also the zkgroup proof
of validity, unsurprisingly).
AnyAuthCredentialPresentation gains a 'get_pni_ciphertext' method,
which will return `None` for the existing presentations and
`Some(encrypted_pni)` for the new credential. Having a separate
credential type but a common presentation type makes it easier for the
server to handle all possible credentials uniformly.
This term is unnecessary after all (the value of 'z' is already fixed
by the equation "Z = I^z"). We can't remove it from earlier proofs
because that would change the format, but going forward we don't need
it.
Like ProfileKeyCredential, but with an expiration timestamp embedded
in it. This has its own credential type and response type, but uses
the same request type as a "classic" ProfileKeyCredential, and
generates presentations usable with AnyProfileKeyCredential-
Presentation, so that existing server code accepting presentations
will automatically do the right thing.
Adoption for servers:
- Update secret params
- When presentations are saved in group state, use
ProfileKeyCredentialPresentation.getStructurallyValidV1PresentationBytes()
to maintain backwards compatibility with existing clients.
- Add an endpoint to issue ExpiringProfileKeyCredentials
- (future) Remove the endpoint that issues regular ProfileKeyCredentials
Adoption for clients, after the server has updated:
- Update public params
- Start fetching and using ExpiringProfileKeyCredentials instead of
regular ProfileKeyCredentials (the old endpoint will eventually
go away)
- Node: To bring types into harmony, a receipt's expiration time has
been changed to a `number` instead of a `bigint`
This trades speed for size around certain elliptic curve operations in
BoringSSL. We're using boring mostly for verifying certificates, not
the many many curve operations we do on a per-message basis, so for
now the code size is more important.
Upcoming work in `attest` requires additional X509 support, and swapping these libraries
is a negligible impact on binary size. This uses a fork of `cloudflare/boring`, as
we have some additions that haven’t yet been contributed upstream.
Optimize presentation of credentials (AuthCredentialPresentationV2,
ProfileKeyCredentialPresentationV2, PniCredentialPresentationV2). The
server will accept V1 or V2 presentations; clients will produce V2.
Various improvements to FFI to support this, and some minor
optimizations (in particular "lazy statics" to avoid redundant loading
of SystemParams).
The main advantage here is that we don't need any dependencies from
the unstable repo, which means we can be sure that the glibc version
we build against is suitable for Buster instead of being pulled in
from a later train. (We can't do this for Stretch because Stretch is
too old for all our build tools.)
While here, simplify the build a little bit: we're already using
snapshots of the Debian repo, so drop the separate file for pinned
dependencies.
Previously the project would error out during the configuration stage,
since the Android Gradle plugin requires JDK 11 to even load. Now it
throws an error if you try to build a top-level task or a task in the
Android subproject, but allows you to build, e.g. 'client:test' with
no problems.
This helper task was supposed to only execute when publishing the
client or server artifacts, but at the point where that was checked
the task graph *hadn't been built yet*. Instead, add the task to the
task graph unconditionally, but disable it by default, and have its
dependents enable it only when publishing.
In Java these are subclasses of IllegalStateException, a
RuntimeException, so that every session operation isn't annotated as
throwing InvalidSessionException. Swift and TypeScript don't have
typed errors, so they're just additional specific cases that can be
caught.
...not a generic RuntimeException. Now that it's only used for
SignalMessage MAC keys, the only way it could be wrong is if it's
provided incorrectly by the user.
Rather than have a separate "testable" artifact, always include Mac
and Windows versions of libsignal_jni.so when publishing
signal-client-java *and* libsignal-server (though not when just
building locally).
Also, finally attach these tasks to the correct step (processResources
rather than compileJava).
Reorganize the Gradle build with three targets:
- signal-client-java (client/)
- signal-client-android (android/)
- libsignal-server (server/)
plus an additional shared/ directory for sources shared between
client/ and server/.
This maintains the distinction between signal-client-java (the Java
parts, plus a Linux libsignal_jni.so for running tests outside of the
Android emulator) and signal-client-android (contains the Android JNI
libraries, plus any Android-specific code, which for now is just
AndroidSignalProtocolLogger, which the app doesn't even use).
The new libsignal-server is built very similarly to
signal-client-java, but only contains the Java sources relevant for
the server...plus the base org.whispersystems.libsignal classes from
the original libsignal-protocol-java, because some of them are
referenced directly in our generated Native.java. (We can improve on
this in the future.) The "testable" artifact that includes macOS and
Windows versions of libsignal_jni.so is now only built for
libsignal-server, not signal-client-java; our Android development
happens on Linux, but server development happens on multiple
platforms.
Tests were recently reorganized into a top-level tests/ directory, but
now there's been another reorganization:
- client/src/test/ - tests to run on any clients
- android/src/androidTest/ - tests to run only on Android devices /
emulators (currently none)
- server/src/test/ - tests to run specifically for the server
(currently none)
- shared/test/ - does not exist, to avoid running the same tests twice
There are no tests to run "only not on Android devices", and it's
currently assumed that all server functionality is tested by the
client tests. The Android device tests run all the client tests as
well (by direct path reference). This may not be the "best" Gradle
layout, but it's at least straightforward to read the Gradle files.
For now there's still only one native library built for both
signal-client-java and libsignal-server, but that could change in the
future.
- Switch to the modern maven-publish plugin.
- Bump the Android target SDK version to 30 to match the app.
(The minimum is still 19.)
- Bump the Java source compatibility version to 1.8.
- Bump the Android command line tools used in Docker to match the app.
- Bump the JDK used in Docker to OpenJDK 11, matching the app.
- Switch to the androidx testing libraries for emulator testing.
- Drop unused trove4j Gradle plugin.
- Lots of cleanup and refactoring.
The Java and Android targets are set up to both run common tests in
the top-level tests/ directory, which will be useful if we ever want
tests that only run in the Android emulator, or do *not* run in the
Android emulator. However, that top-level folder doesn't need to be a
Gradle module itself.
- Java: on IdentityKeyPair and IdentityKey, respectively
- Swift: on IdentityKeyPair and IdentityKey, respectively
- Node: on IdentityKeyPair and PublicKey; Node doesn't have a separate
IdentityKey API
For convenience, exposes IdentityKeyPair.generate() in Java and Node
as well. (This API already existed in Swift.)
Fingerprint checks are done with a boolean-returning method; the error
is never thrown. Android and iOS aren't using the exception / error
case either.
Previously this was defined in the app layers, because zkgroup's
original codegen didn't support custom exception types. However, we
can now move it to a common implementation in Rust.
- Don't validate sizes ahead of time if the subclass calls
CheckValidContents anyway.
- Move serialize() up to ByteArray. Consequently, make ProfileKeyVersion
*not* a ByteArray, since it serializes as a string.
This is a pretty mechanical translation *except* for
- moving the RANDOM_LENGTH constant out of the obsolete Native class
(libsignal-client has its own) into a new Constants class
- replacing the mocked SecureRandom with a custom subclass; Mockito
was refusing to mock SecureRandom and honestly that's fair
- removing unused classes UUIDUtil and ZkGroupError
- updating to JUnit 4, which zkgroup's tests rely on
These APIs are designed to match the generated "simpleapi" entry
points in the original zkgroup repository, to make it easier to adapt
the existing Java, Swift, and TypeScript code to libsignal-client.
The cbindgen-generated signal_ffi.h now includes constants, so that
the fixed-size arrays used to serialize zkgroup types can use named
constants in Rust. This meant filtering out some constants that were
getting picked up but that should not be included.
Note that this commit makes references to Java exception types that
will be added in a later commit.
This is like signal-client-java, but also contains dylibs for Mac and
Windows for testing purposes. Gradle will automatically fetch these
artifacts from the corresponding GitHub release.
This will be used to build a "testable" signal-client-java.jar that
includes native libraries for macOS and Windows in addition to Linux.
This is something zkgroup already has; in particular it allows
developers working on the server to use the zkgroup APIs even if they
run macOS or Windows on their individual machines.
Previously, we had HKDF-for-session-version-3, which matches RFC 5869,
and HKDF-for-session-version-2, which produced slightly different
results. However, nothing in the current versions of Signal uses
anything but the RFC-compliant version. Therefore, this commit removes
support for version 2 and deprecates the entry points that take a
version:
- Java: The HKDFv3 class is deprecated in favor of static methods on
the HKDF class.
- Swift: The hkdf function that takes a 'version' parameter is
deprecated in favor of a new overload that does not.
- TypeScript: The HKDF class is deprecated in favor of a top-level
hkdf function.
- Rust: The libsignal-protocol implementation of HKDF has been removed
entirely in favor of the hkdf crate.
There are no significant benchmark deltas from this change, and a
minimal code size increase that's the cost for removing our own
implementation of HKDF. The deprecations can be removed as a later
breaking change.
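For reference, the RFC 5869 construction that remains supported is
small enough to sketch self-contained: an HMAC-SHA256 "extract" step
followed by iterated "expand" blocks. This is an illustration of the
algorithm only, not libsignal's actual implementation; the method name
deriveSecrets is borrowed loosely from the Java API.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

final class Hkdf {
    private static byte[] hmac(byte[] key, byte[]... chunks) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            // RFC 5869: an absent salt is treated as HashLen zero bytes.
            mac.init(new SecretKeySpec(key.length == 0 ? new byte[32] : key,
                                       "HmacSHA256"));
            for (byte[] c : chunks) mac.update(c);
            return mac.doFinal();
        } catch (GeneralSecurityException e) {
            throw new AssertionError(e);
        }
    }

    static byte[] deriveSecrets(byte[] ikm, byte[] salt, byte[] info,
                                int outputLength) {
        byte[] prk = hmac(salt, ikm); // extract
        byte[] t = new byte[0];
        byte[] okm = new byte[outputLength];
        int offset = 0;
        for (byte counter = 1; offset < outputLength; counter++) { // expand
            t = hmac(prk, t, info, new byte[] { counter });
            int n = Math.min(t.length, outputLength - offset);
            System.arraycopy(t, 0, okm, offset, n);
            offset += n;
        }
        return okm;
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

This matches the RFC 5869 test vectors, which is exactly the property
the "version 2" variant lacked.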
If garbage collection happens at exactly the wrong time, the Java
wrapper around a Rust object (such as SessionRecord) can be finalized
while the Rust object is being used, via its opaque 'nativeHandle'
(address cast as integer). Avoid this by adding a NativeHandleGuard
type that keeps the wrapper alive, as well as a low-level entry point
`Native.keepAlive(...)` that does nothing but serve as a sort of GC
guard, similar to `Reference.reachabilityFence()` in Java 9.
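The hazard and the fix can be shown in miniature. If the raw handle
were the only live use of the wrapper, the GC would be free to finalize
the wrapper (freeing the Rust object) while native code still ran
against the handle; a reachability fence keeps the wrapper alive across
the call. The types below are stand-ins, not the real NativeHandleGuard.

```java
import java.lang.ref.Reference;

final class GuardExample {
    static final class Wrapper {
        // Stand-in for the opaque Rust pointer; a real wrapper would
        // free the Rust object in finalize() or via a Cleaner.
        final long nativeHandle = 0x1234;
    }

    static long useNative(Wrapper wrapper) {
        long handle = wrapper.nativeHandle;
        try {
            // ... call into native code with `handle` ...
            return handle;
        } finally {
            // Guarantees `wrapper` is not finalized before this point,
            // even though only the raw `handle` is used above.
            Reference.reachabilityFence(wrapper);
        }
    }
}
```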
This knocks about 10% off of the built binary for Android (per slice),
to balance out the increased size from the new toolchain and stdlib.
Applying the same `opt-level=s` option for `cargo bench` (on desktop)
gives a roughly 1% slowdown, a trade-off that's worth it.
This dedicated error is thrown when a recipient has a registration ID
that's out of the range used by Signal [0, 0x3FFF]. These IDs cannot
be encoded in the sealed sender v2 format and are not supported, even
though they don't cause any problems for 1:1 messages.
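The check itself is just a 14-bit range test, since [0, 0x3FFF] is
exactly what fits in the SSv2 per-recipient encoding. A sketch with an
illustrative exception type (the dedicated error has its own name in
the real API):

```java
final class RegistrationIdCheck {
    // Rejects any registration ID that can't be encoded in SSv2's
    // 14-bit field; negative values fail the mask test too.
    static int validateForSsv2(int registrationId) {
        if ((registrationId & 0x3FFF) != registrationId) {
            throw new IllegalArgumentException(
                "registration ID cannot be encoded in SSv2: " + registrationId);
        }
        return registrationId;
    }
}
```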
This is useful for PlaintextContent messages (just
DecryptionErrorMessage for now), which can't include a group ID when
sent outside of sealed sender because it would reveal group
membership.
That is, when there's an error decrypting the inner payload of a
sealed sender message, instead of just saving the sender (and more
recently the content hint and group ID), save the whole decrypted
contents of the sealed sender message. This is necessary so that the
app can make a DecryptionErrorMessage from that failed payload.
This is complicated somewhat by the fact that the app also uses the
"short" constructor for the various Protocol*Exceptions, so we have to
keep those working.
This allows a device to know whether it's the one that sent a bad
message, and take action accordingly.
We could have a slightly more typesafe API here by using
ProtocolAddress and extracting the device ID, but that doesn't match
up with getting the device ID out of a sealed sender certificate.
- Default: sender will not resend; an error should be shown
immediately
- Resendable: sender will try to resend; delay any error UI if
possible
- Implicit: don't show any error UI at all; this is something sent
implicitly like a typing message or a receipt
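The three hints map naturally onto an enum; the sketch below is a
stand-in (the actual protocol constants and class live in the library),
shown only to make the intended client behavior concrete.

```java
enum ContentHint {
    DEFAULT,    // sender will not resend; show an error immediately
    RESENDABLE, // sender will try to resend; delay any error UI if possible
    IMPLICIT;   // no error UI at all (e.g. typing indicators, receipts)

    // Hypothetical helper: only the DEFAULT hint warrants immediate UI.
    static boolean shouldShowErrorImmediately(ContentHint hint) {
        return hint == DEFAULT;
    }
}
```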
This checks if there is an active sender state using the given ratchet
key, for use with decryption error messages. In this case, the app may
choose to archive the current session, or take even stronger actions
such as fetching new prekeys for the recipient.
The app-visible change is that sealedSenderMultiRecipientEncrypt now
takes a SessionStore as well. Sessions will be looked up in bulk using
a new SessionStore API, 'loadExistingSessions' or
'getExistingSessions`. The registration ID is then loaded from each
session and included in the resulting SSv2 payload.
The implementation is a bit of a divergence from some other APIs in
libsignal-client in that the "look up in bulk" step is performed in
the Java, Swift, or TypeScript layer, with the resulting sessions
passed down to Rust. Why? Because otherwise we'd pass a list of
addresses into Rust, which would have to turn them back into a Java,
Swift, or TypeScript array to call the SessionStore method. This would
be (1) a bunch of extra work to implement, and (2) a waste of CPU when
we already /have/ a list of addresses in the correct format: the
argument to sealedSenderMultiRecipientEncrypt.
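Sketched in stand-in types, the flow is: the app layer does one bulk
load against its store, then hands the resulting sessions (and their
registration IDs) to the encrypt call, rather than Rust calling back
per address. All names below are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

final class BulkLookupSketch {
    record Session(int registrationId) {}

    interface SessionStore {
        // Analogous in spirit to loadExistingSessions/getExistingSessions:
        // one call for the whole address list.
        List<Session> loadExistingSessions(List<String> addresses);
    }

    // The app layer already has the address list in hand, so it performs
    // the bulk lookup itself and passes sessions down.
    static List<Integer> registrationIds(SessionStore store,
                                         List<String> addresses) {
        List<Integer> ids = new ArrayList<>();
        for (Session s : store.loadExistingSessions(addresses)) {
            ids.add(s.registrationId());
        }
        return ids;
    }
}
```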
This is an example of "the boundaries between the Rust and
Java/Swift/TypeScript parts of the library don't have to be perfect;
they're internal to the overall product". In this case, we've taken
that a little further than usual: usually we try to make the
libsignal-protocol API as convenient as possible as well, but here it
had to be a bit lower-level to satisfy the needs of the app language
wrappers. (Specifically, callers need to fetch the list of
SessionRecords themselves.)
P.S. Why doesn't v1 of sealed sender include registration IDs? Because
for SSv1, libsignal-client isn't producing the entire request body to
upload to the server; it's only producing the message content that
will be decrypted by the recipient. With SSv2, the serialized message
the recipient downloads has both shared and per-recipient data in it,
which the server must assemble from the uploaded request. Because of
this, SSv2's encrypt API might as well produce the entire request.
Registration IDs are used to detect if a device ID has been reused,
since the new device will (with high probability) use a different
randomly-generated registration ID from the old one. The server should
be able to validate this for SSv2 like it does for SSv1, though the
handling of this for SSv1 is in the various apps.