Clients should only trust MRENCLAVE values from a non-debug
build. But, as an extra precaution, verify that the remote
enclave is not running in debug mode.
This was previously in the Java layer because it only really affects
the server, but it's more consistent to have all verification in the
Rust layer. We do lose the separate exception type for it, though.
Adds a Java method for libsignal-server that enables extracting
attestation metrics from serialized evidence and endorsements.
Certificate and endorsement validity periods are exposed, so servers
can track if any attestation material is overly stale.
As part of DCAP attestation, the client-provided timestamp is compared
to various pieces of quote collateral to verify that the collateral is
currently valid. Some of this collateral can be fresh enough that a
client with significant clock skew may see the start of the validity
period in the future.
Allow for 1 day of clock skew, at the expense of collateral expiring
1 day earlier.
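The tradeoff can be sketched as follows (a minimal illustration, not
libsignal's actual verification code; names are invented):

```rust
use std::time::{Duration, SystemTime};

// Allowance for client clock skew (hypothetical constant name).
const SKEW_ALLOWANCE: Duration = Duration::from_secs(24 * 60 * 60);

// Checks `now` against a collateral validity window. Shifting the
// client's clock forward by the allowance before comparing means a
// client running up to one day slow still lands inside the window,
// while collateral is treated as expiring one day before its real
// `not_after`.
fn is_collateral_valid(
    now: SystemTime,
    not_before: SystemTime,
    not_after: SystemTime,
) -> bool {
    let adjusted = now + SKEW_ALLOWANCE;
    adjusted >= not_before && adjusted <= not_after
}
```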
The only supported way to target an older glibc is to build against
that glibc; consequently, we need to build on an Ubuntu 16 system (or
similar) to target Ubuntu 16. This requires downloading second-party
versions of Clang and CMake (the versions in the default Ubuntu
repositories are too old), as well as building our own Python.
Do all this in a new Dockerfile based on Ubuntu 16.04. This isn't as
rigorous as the Java "reproducible build" Dockerfile, since we're not
pinning the base image or the repositories we're fetching from, but
it's still an image with the environment and tools we need.
- Skip building for Catalyst in pull request testing, but make up for
it in the "Slow Tests".
- Update the README now that the Arm Mac simulator is a Tier 2 Rust
target.
- Remove workaround for one-time incompatibility with the Arm Mac
simulator.
This regressed when we switched from picky to boring because BoringSSL
accepts either PKCS#1 or PKCS#8 when initializing an RSA private key,
and so the default BoringSSL PKCS#1 serialization wasn't caught. Now
we explicitly request PKCS#8.
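For reference, the two encodings are easy to tell apart structurally.
The sketch below is purely illustrative (not libsignal or BoringSSL
code): it distinguishes a DER-encoded PKCS#1 RSAPrivateKey from a
PKCS#8 PrivateKeyInfo by looking at the element that follows the
version INTEGER.

```rust
// In PKCS#8, the version INTEGER is followed by the
// AlgorithmIdentifier SEQUENCE (tag 0x30); in PKCS#1 it is followed
// by the modulus INTEGER (tag 0x02). Returns None if the input does
// not look like either.
fn looks_like_pkcs8(der: &[u8]) -> Option<bool> {
    // Outer SEQUENCE header.
    if der.first() != Some(&0x30) {
        return None;
    }
    // Skip the (possibly long-form) length of the outer SEQUENCE.
    let mut i = 2;
    if let Some(&len) = der.get(1) {
        if len & 0x80 != 0 {
            i += (len & 0x7f) as usize;
        }
    }
    // Version INTEGER: tag 0x02, then its length and value.
    if der.get(i) != Some(&0x02) {
        return None;
    }
    let ver_len = *der.get(i + 1)? as usize;
    i += 2 + ver_len;
    // The next tag decides: SEQUENCE => PKCS#8, INTEGER => PKCS#1.
    match *der.get(i)? {
        0x30 => Some(true),
        0x02 => Some(false),
        _ => None,
    }
}
```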
- Use the headless variant of the JDK.
- Put most apt-get requirements at the end of the file, so that
tweaking them can make use of Docker's per-RUN line caching.
- Added 'clang' as a build dependency for BoringSSL.
- Drop unnecessary packages:
- apt-transport-https - we're using plain http sources at this time
- build-essential - overkill, we just need 'make'
- gcc-multilib - was used to build OpenSSL for testing,
no longer necessary with the switch to BoringSSL
- openssh-client - was used to clone from GitHub, now unused because
all dependencies are public
And note that the "slow tests" should also be passing before a
release.
(but only if there have been any changes)
For now this is just exercising the Docker build, but I think we
should put some of the CocoaPods testing in here too, if not more of
the regular pull request testing.
Implements (a subset of) Intel's DCAP attestation,
making heavy use of 'boring' for X509 and ECDSA.
Cds2Client is now ready for use!
Co-authored-by: Jordan Rose <jrose@signal.org>
Co-authored-by: Ravi Khadiwala <ravi@signal.org>
This annoying function is implemented separately for each bridge
because it produces two results, and the optimal way of doing that for
each bridge differs.
Symbols are stripped on both iOS and Android by the time the app gets
to a user's device, so backtraces will only ever have addresses, and
spending time (and code size) trying to symbolicate them is wasted;
skipping symbolication saves on code size in the backtrace crate.
Symbolicating is still useful for Desktop and Server, though.
Update to a revision of BoringSSL that supports cross-compilation to
AArch64 for both Linux and Windows (from an x86_64 host of the same
OS), and provide the necessary environment variables for the Linux
cross-build.
Otherwise, we can run into paths that exceed the classic Windows path
limit due to the nesting of build systems (GitHub Actions > node-gyp >
Cargo > CMake > Visual Studio). Unfortunately, at least some of Visual
Studio's tools are not long-path-aware.
And tweak the test file as a reminder that the top-level
zkgroup/index.ts exists; though since we still don't reference most
types by name in the tests, this wouldn't have actually caught the
oversight.
In the past manually-run GitHub Actions could only be run from a
branch, so specifying a tag to build had to be done explicitly. That's
no longer true, so we can remove that field.
- Combine stable and nightly job definitions in the workflow file
- Build bins along with benches
- Use --all-features for tests and bins and Clippy, to make sure the
maximum amount of code is tested. (If we ever have code omitted when
a feature is turned on, we may want to add more test configuration.)
generate-server-params takes existing server params through stdin
(base64-encoded) and generates randomness for any new keys that have
been added since last time. As long as new keys are always added to the end
of ServerSecretParams and ServerPublicParams, this allows updating
zkgroup without breaking existing credentials.
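The append-only compatibility argument can be sketched like this
(struct and field names are invented for illustration; they are not
zkgroup's actual definitions):

```rust
// Hypothetical append-only layout: keys are fixed-size and serialized
// in order, so a newer serialization is a strict extension of the
// older one.
struct ServerSecretParamsV2 {
    key_a: [u8; 32], // existed in the previous version
    key_b: [u8; 32], // new key, appended at the end
}

impl ServerSecretParamsV2 {
    // Takes the previous serialization (the stdin input, after base64
    // decoding) and fills in randomness only for the newly added key.
    fn upgrade(old_bytes: &[u8], fresh_randomness: [u8; 32]) -> Self {
        let mut key_a = [0u8; 32];
        key_a.copy_from_slice(&old_bytes[..32]);
        Self { key_a, key_b: fresh_randomness }
    }

    // Old deserializers read only the first 32 bytes, so existing
    // credentials keep verifying against the unchanged prefix.
    fn serialize(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(64);
        out.extend_from_slice(&self.key_a);
        out.extend_from_slice(&self.key_b);
        out
    }
}
```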
This is a variant of AuthCredential that carries two UUIDs, intended
to be a user's ACI and PNI. Why? Because when you've been invited to a
group, you may have been invited by your ACI or by your PNI, or by
both, and it's easier for clients to treat all those states the same
by having a credential that covers both identities. The downside is
that it's larger (both the data, obviously, but also the zkgroup proof
of validity, unsurprisingly).
AnyAuthCredentialPresentation gains a 'get_pni_ciphertext' method,
which will return `None` for the existing presentations and
`Some(encrypted_pni)` for the new credential. Having a separate
credential type but a common presentation type makes it easier for the
server to handle all possible credentials uniformly.
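A minimal sketch of that uniform-handling pattern (the variant and
field details here are invented, not libsignal's actual definitions):

```rust
// A common presentation type over both old and new credentials lets
// server code call one method regardless of which kind it received.
enum AnyAuthCredentialPresentation {
    V1 {}, // existing presentation; fields elided
    WithPni { encrypted_pni: Vec<u8> },
}

impl AnyAuthCredentialPresentation {
    // Existing presentations carry no PNI, so they return `None`;
    // the new two-UUID credential returns `Some(encrypted_pni)`.
    fn get_pni_ciphertext(&self) -> Option<Vec<u8>> {
        match self {
            Self::V1 { .. } => None,
            Self::WithPni { encrypted_pni, .. } => Some(encrypted_pni.clone()),
        }
    }
}
```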
This term is unnecessary after all (the value of 'z' is already fixed
by the equation "Z = I^z"). We can't remove it from earlier proofs
because that would change the format, but going forward we don't need
it.
Without this, two ByteArray types without any additional operations
are structurally equivalent, and so TypeScript permits passing one as
the other. (Thanks, Fedor!)
Like ProfileKeyCredential, but with an expiration timestamp embedded
in it. This has its own credential type and response type, but uses
the same request type as a "classic" ProfileKeyCredential, and
generates presentations usable with AnyProfileKeyCredentialPresentation,
so that existing server code accepting presentations
will automatically do the right thing.
Adoption for servers:
- Update secret params
- When presentations are saved in group state, use
ProfileKeyCredentialPresentation.getStructurallyValidV1PresentationBytes()
to maintain backwards compatibility with existing clients.
- Add an endpoint to issue ExpiringProfileKeyCredentials
- (future) Remove the endpoint that issues regular ProfileKeyCredentials
Adoption for clients, after the server has updated:
- Update public params
- Start fetching and using ExpiringProfileKeyCredentials instead of
regular ProfileKeyCredentials (the old endpoint will eventually
go away)
- Node: To bring types into harmony, a receipt's expiration time has
been changed to a `number` instead of a `bigint`
This trades speed for size around certain elliptic curve operations in
BoringSSL. We're using boring mostly for verifying certificates, not
the many many curve operations we do on a per-message basis, so for
now the code size is more important.
"RedemptionTime" becomes "CoarseRedemptionTime", highlighting its
measurement in days.
"ReceiptExpirationTime" becomes "Timestamp", highlighting its
forthcoming generalized use beyond receipts and it being the preferred
type going forward.
Upcoming work in `attest` requires additional X509 support, and
swapping these libraries has a negligible impact on binary size. This
uses a fork of `cloudflare/boring`, as we have some additions that
haven't yet been contributed upstream.