4.4.7 Extensions of multi-message signature schemes
The multi-message signature schemes described in clauses 4.4.1 to 4.4.5 are based on the classic approach for building (Q)EAAs from a set of advanced cryptographic mechanisms such as BBS+, CL or PS-MS signatures. While this approach does support selective disclosure, it comes at the cost of concealing the undisclosed attributes in a zero-knowledge proof whose complexity grows linearly with the number of such attributes. In order to minimize the size of the (Q)EAAs and their verifiable presentations, more elaborate approaches have been proposed for BBS+ and PS-MS, where undisclosed attributes have no impact on the proof size, which is beneficial for selective disclosure. Below are three cryptographic research papers that describe such approaches:
• "MoniPoly: An Expressive q-SDH-Based Anonymous Attribute-Based Credential System" [i.240], published by Syh-Yuan Tan and Thomas Gross (2020).
• "Efficient Redactable Signature and Application to Anonymous Credentials" [i.232], published by Olivier Sanders (2020).
• "Improving Revocation for Group Signature with Redactable Signature" [i.233], published by Olivier Sanders (2021).
4.5 Proofs for arithmetic circuits (programmable ZKPs)

4.5.1 General
Arithmetic circuits can represent any computational logic. Consequently, proofs for arithmetic circuits are "programmable ZKPs": as every statement can be translated into an arithmetic circuit, a ZKP for any statement can be constructed. Programmable ZKPs are often designed and implemented as zk-SNARKs, which are further described in clause 4.5.2.
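To illustrate what translating a statement into an arithmetic circuit means, the following minimal Python sketch (not taken from any referenced specification; the statement and its gate decomposition are arbitrary examples) flattens a simple statement into addition and multiplication gates, which is the form a programmable ZKP would then prove without revealing the witness:

```python
# Minimal illustration: the statement "I know x such that x^3 + x + 5 == 35"
# flattened into one gate per arithmetic operation. A real zk-SNARK would
# encode these gates as constraints (e.g. R1CS) and prove their satisfaction
# in zero knowledge, without revealing the witness x.

def circuit(x):
    t1 = x * x        # gate 1: multiplication
    t2 = t1 * x       # gate 2: multiplication
    t3 = t2 + x       # gate 3: addition
    out = t3 + 5      # gate 4: addition of a constant
    return out

witness = 3                 # private witness known only to the prover
public_output = 35          # public part of the statement
assert circuit(witness) == public_output
```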
4.5.2 zk-SNARKs

4.5.2.1 Introduction to zk-SNARKs
The abbreviation zk-SNARK stands for "Zero-Knowledge Succinct Non-interactive ARgument of Knowledge", and is a collective term for a specific category of ZKP protocols. At the time of writing (in August 2025), eighteen zk-SNARK protocols have been published by cryptographic researchers; see clause A.4 for a list of all zk-SNARK protocols. The zk-SNARK characteristics can be broken down as follows:
• Zero-knowledge: as defined earlier, the proof gives no information beyond the fact that the statement is correct, plus any information that can be trivially derived from the statement (e.g. a ZKP that a holder is older than 19 trivially also proves that the holder is older than 18).
• Succinct: the proof size grows sublinearly with the statement's size (e.g. logarithmically or even independently of the statement size, i.e. constant proof size).
• Non-interactive: randomness is not provided by the verifier (but by a random oracle). Consequently, a single message from the prover suffices to convince any verifier.
• ARgument: cryptographic evidence (that relies on battle-tested computational hardness assumptions such as DLP, as opposed to a full mathematical proof).
• of Knowledge: the proof demonstrates the user's knowledge of data (a witness) that proves the statement (not just its existence).
NOTE 1: A zk-SNARK system provides selective disclosure, unlinkability, and predicate proofs by design.
NOTE 2: The succinctness property is not necessary for privacy in verifiable presentations. On the contrary, as shorter proof sizes and verification complexity typically require more resources on the prover side, and the hardware running the prover is commonly a mobile phone, which is much more resource-constrained than a relying party's server, succinctness is probably not a desirable property in an implementation of general-purpose ZKPs for digital wallets. This resonates well with more recent and efficient constructions that use (non-succinct) zk-NARKs.
Besides proving time, memory requirements can pose an issue on mobile devices. While many zk-(S)NARK constructions have very high memory requirements (scaling with the size of the overall transcript of the witness verification algorithm to be proved), some constructions, such as Ligero, allow for garbage collection and hence scale memory requirements only with the maximum memory required while running the witness verification algorithm.
The concept of zk-SNARK was initially described by Alessandro Chiesa et al. in a paper [i.65] in 2012, which in turn was based on Jens Groth's work [i.125] from 2010. The first general or programmable zk-SNARK protocol, Pinocchio [i.220], was designed and implemented in 2013. Hence, a zk-SNARK can efficiently create specific ZKPs for any statement, for example for the correct execution of a C program. There is an important distinction between zk-SNARK proving systems that require a program (circuit)-specific preprocessing and those that do not. So far, mainly preprocessing SNARKs have been used in practice (blockchain privacy and scaling projects) because they tend to have higher proving performance, as they can be hand-optimized to the program. However, for a different program (e.g. after a patch) the preprocessing needs to be conducted again. On the other hand, so-called zero-knowledge Virtual Machines (zkVMs) can dynamically prove the correct execution of any program (represented by an instruction set received through compilation, e.g. a C or Rust program compiled with LLVM).
Promising candidates that would allow the dynamic execution of certificate verification include a zk-WASM (not yet a fully-fledged VM) based on the Ligero [i.7] proof system, called Ligetron.
NOTE 3: In the zkVM case, there is also a preprocessing step, but it is only instruction-set specific and, therefore, not program-specific.
A zk-SNARK protocol can be based on a trusted setup or on a transparent setup, as further described in clauses 4.5.2.2 and 4.5.2.3.
4.5.2.2 Trusted setup of zk-SNARKs
The trusted setup of a zk-SNARK involves three algorithms KeyGen, CP and CV, as illustrated in Figure 8.

Figure 8: Overview of zk-SNARK with trusted setup

The key generator KeyGen takes a secret parameter sd (secret data), also called "toxic waste", and the program C for which correct execution should be proven (the statement), and generates two publicly available keys, the user's proving key pk and the relying party's verification key vk. These keys are public parameters that need to be generated once for a specific program C.
NOTE 1: The parameter sd used in the generator is a secret value. If this parameter is known to an attacker, the attacker can generate fake proofs, i.e. without knowing a valid witness w. In other words, the soundness guarantees of the zk-SNARK would no longer be satisfied. However, the zero-knowledge property is not conditional on the secrecy of sd. In the context of digital attestations, even a citizen who does not trust the entity that ran the trusted setup need not be afraid of a loss of privacy guarantees.
NOTE 2: To make sure that sd cannot be leaked, many projects (particularly on blockchains, where whoever runs the trusted setup is unlikely to be trusted by everyone) operate the trusted setup as a multi-party computation among many entities, such that sd is only leaked if all of these entities collude. As such, if a verifier trusts only a single entity involved in the trusted setup, soundness of the zk-SNARK system is guaranteed, i.e. no fake proofs can practically be created.
NOTE 3: In principle, each relying party (verifier) could run their own trusted setup and distribute the corresponding pk to the holder: if the verifier protects their sd, they do not need to be afraid of receiving fake proofs. However, there are two significant drawbacks. First, pk tends to be large for practical presentations (tens to hundreds of MB), so real-time distribution is impractical and a pk that all verifiers accept is more desirable (particularly because different presentations correspond to different programs and therefore require different pk). Second, as the holder cannot check the setup conducted by the verifier, additional certification of the pk is needed to make sure it is derived from the correct program (and not some other program that outputs more information than stated), allowing a user to trust in the privacy guarantees.
The user executes the algorithm CP with the following input parameters: its (static) proving key pk, a (dynamic) public input pd (public data), and a private witness w. The algorithm CP generates the proof value prf = CP(pk, pd, w), as evidence that the user knows a witness w.
EXAMPLE 1: The public data pd could be the statement, for example that the user's age is above 18. It will also likely involve a nonce to avoid replay attacks and a set of public keys for accepted issuers against which the signature of the user's attestation (which represents part of the witness) is verified in the zk-SNARK.
The verifying relying party calculates the algorithm CV(vk, pd, prf), which returns true if the proof is correct and false otherwise. Hence, the function CV returns true if the user knows a witness w that satisfies the function C(pd, w) = true.
EXAMPLE 2: zk-SNARK protocols with trusted setup are Pinocchio [i.220], Geppetto [i.72], and TinyRAM [i.19]. For a complete list of zk-SNARK protocols with trusted setups, see table A.4 in clause A.4.
NOTE 4: Most zk-SNARKs with trusted setup actually involve a two-step trusted setup: one step that is not dependent on C and a second one that is dependent on C. In 2019, PLONK [i.116] was introduced as a universal zk-SNARK protocol. In this approach, only the first step, which is independent of C, involves toxic waste that may compromise soundness; the second, C-dependent step - while involving a computationally intensive preprocessing step - does not involve toxic waste any more but only relies on the output of the first step. However, the "complexity" of the programs C that can be covered is bounded by the sizes covered by the first step.
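The calling pattern of the three algorithms can be sketched as follows (an illustrative pseudo-implementation, not any particular zk-SNARK library: the function bodies are placeholders and the names simply mirror KeyGen, CP and CV above):

```python
# Hedged sketch of the zk-SNARK trusted-setup interface described in this clause.
# The bodies are placeholders; only the calling pattern is illustrated.

from typing import Any, Callable, Tuple

def KeyGen(sd: bytes, C: Callable[..., bool]) -> Tuple[Any, Any]:
    """Trusted setup: from secret data sd ('toxic waste') and program C,
    derive the public proving key pk and verification key vk."""
    pk = ("pk", id(C))   # placeholder
    vk = ("vk", id(C))   # placeholder
    return pk, vk

def CP(pk: Any, pd: Any, w: Any) -> Any:
    """Prover: produce a proof prf that a witness w satisfying C(pd, w) is known,
    without revealing w."""
    return ("prf", pd)   # placeholder

def CV(vk: Any, pd: Any, prf: Any) -> bool:
    """Verifier: accept or reject the proof for the public data pd."""
    return prf == ("prf", pd)  # placeholder check

# Setup once per program C, then prove/verify per presentation.
C = lambda pd, w: w["age"] >= pd["min_age"]           # the statement as a program
pk, vk = KeyGen(sd=b"toxic waste", C=C)               # run once; sd is then destroyed
prf = CP(pk, pd={"min_age": 18}, w={"age": 42})       # holder side
assert CV(vk, pd={"min_age": 18}, prf=prf)            # relying-party side
```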
4.5.2.3 Transparent setup zk-SNARKs
In a transparent (public) setup of a zk-SNARK there is no need for a trusted setup. Yet, to achieve succinctness, a computationally and memory-intensive preprocessing step is still required.
EXAMPLE: zk-SNARK protocols with transparent (public) setups are SuperSonic [i.198], Hyrax [i.250] and Halo [i.31]. For a complete list of zk-SNARK protocols with transparent setups, see table A.4 in clause A.4.
Moreover, hash-based zk-(S)NARKs, such as Ligero, are often not succinct but still sublinear in proof size and/or verification time, and hence require only a transparent setup.
NOTE: If the general-purpose ZKP is transparent and not succinct, the transparent setup may be as simple as specifying the cryptographic hash function used in the construction.
4.5.2.4 Cryptography behind zk-SNARKs
The cryptography that underpins the zk-SNARK schemes is highly complex and differs from protocol to protocol. In brief, the zk-SNARK protocols can be constructed based on the following cryptographic building blocks [i.222]:
• Fiat-Shamir heuristics, which in turn can be broken down into Sigma protocols, Random Oracle Models (ROM) and Fiat-Shamir-compatible hash functions.
• Probabilistically Checkable Proofs (PCP): Merkle trees and hash functions, Kilian's interactive argument of knowledge, and Micali's Computationally Sound (CS) proofs.
• Quadratic Arithmetic Programs (QAPs) and Square Span Programs (SSPs).
• Linear Interactive Proofs (LIPs).
• Polynomial Interactive Oracle Proofs (PIOPs).
A common construction involves three steps:
1) Arithmetization: representing the program C as a sequence of simple algebraic operations, such as additions and multiplications. Common representations are Rank-1 Constraint Systems (R1CS), PLONKish, and Algebraic Intermediate Representation (AIR).
2) This representation is translated into one or multiple polynomials, such that knowledge of a witness, corresponding to a valid execution trace of C, corresponds to certain properties of the polynomials (e.g. roots at certain positions or equalities between one polynomial and a product of two other polynomials). Challenging this equality under the assumption of a truthfully answering prover corresponds to an Interactive Oracle Proof (IOP). The IOP is an information-theoretic object, i.e. it does not rely on cryptographic hardness assumptions. Because of the good error amplification of polynomial encodings following the Schwartz-Zippel lemma (polynomials of low degree over a large field are either equal or differ in almost every point), few spot checks are sufficient, with the corresponding points for the spot checks determined using the Fiat-Shamir heuristic.
3) Using a cryptographic Polynomial Commitment Scheme (PCS), the prover can be forced to answer truthfully to queries of these polynomials (which are not shared by the prover). The PCS is responsible for the transparency properties of the setup (trusted or transparent) and is the reason why a "proof" based on a PCS becomes an "argument".
NOTE 1: Depending on the IOP and PCS, some zk-SNARKs are not post-quantum secure, i.e. soundness guarantees rely on hardness assumptions such as DLP. As for the toxic waste, the zero-knowledge property is, by contrast, unconditional.
NOTE 2: Bulletproofs [i.38] - developed by Bünz et al. - are a family of zk-SNARKs with reduced succinctness properties (proof size is sublinear, but verification time is not). See clause 4.5.4 for more information about Bulletproofs.
NOTE 3: zk-STARKs [i.17] and [i.203] - developed by Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev [i.18] - are a family of transparent zk-SNARKs that are plausibly post-quantum secure, i.e. soundness guarantees plausibly hold against an adversary with a quantum computer. They are instantiated with a specific arithmetization (AIR) and IOP-PCS combination (Fast Reed-Solomon IOP - FRI) that relies on low-degree testing of polynomials and Merkle trees for opening polynomials on small subgroups. Because of their FRI-based construction, proof sizes of zk-STARKs are around 100 to 1 000 times higher than proof sizes of the shortest zk-SNARKs. See clause 4.5.3 for more information about zk-STARKs.
Given the vast literature on zk-SNARK algorithms, a complete description of the cryptography for zk-SNARKs goes beyond the scope of the present document. For further reading about the cryptographic algorithms behind the zk-SNARK protocols, the following papers are recommended: Nitulescu "zk-SNARKs: A Gentle Introduction" [i.205], Petkus "Why and How zk-SNARK Works: Definitive Explanation" [i.222], and Evans "Succinct Proofs and Linear Algebra" [i.107].
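The reason few spot checks suffice in step 2 can be illustrated with the Schwartz-Zippel lemma (illustration only; the polynomials, field size and naming below are arbitrary choices, not taken from any referenced protocol):

```python
# Illustration of the Schwartz-Zippel idea behind IOP spot checks: two distinct
# polynomials of degree <= d over a field of size p agree on at most d points,
# so a single random evaluation catches a cheating prover with probability >= 1 - d/p.

import random

p = 2**61 - 1                      # a large prime field (illustrative choice)

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x (mod p)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

honest   = [5, 0, 3, 1]            # f(X) = 5 + 3X^2 + X^3
cheating = [5, 0, 3, 2]            # differs from f in a single coefficient

r = random.randrange(p)            # verifier's random challenge (Fiat-Shamir in practice)
print(poly_eval(honest, r) == poly_eval(cheating, r))   # almost surely False
```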
4.5.2.5 Implementations
As regards implementations, zk-SNARKs were implemented in 2016 for the ZeroCash blockchain protocol for the cryptocurrency ZCash, in which zk-SNARKs cater for four different transaction types: private, shielding, deshielding, and public. Hence, zk-SNARKs allow users to determine how much data is shared with the public ledger for each transaction. Ethereum zk-Rollups also utilize zk-SNARKs to increase scalability. In doing so, they do not make use of the zero-knowledge property but of the succinctness property, so some zk-rollups are, in fact, based on SNARKs and not on zk-SNARKs. Furthermore, zk-SNARKs have been implemented as general-purpose ZKP schemes in combination with existing digital identities, as described in clause 6.5.
4.5.2.6 Cryptographic analysis
Whether a zk-SNARK protocol is quantum-safe or not depends on the underlying cryptographic algorithms, as described in table A.4. The zk-SNARK protocols Aurora [i.20], Ligero [i.7], Spartan [i.200], and Virgo [i.273] are considered plausibly quantum-safe (with respect to soundness), whilst the others in table A.4 are not considered quantum-safe. It is possible to implement presentations of (Q)EAAs using zk-SNARKs that support fully unlinkable attestations.
NOTE 1: Succinct proofs can typically be turned into ZKPs quite easily by adding blinding factors, since a succinct proof already eliminates a lot of superfluous information ("there cannot be much sensitive information left"). In the context of the EUDIW, the succinctness property is arguably not very relevant because the complexity of the statement to be proved is low enough to be handled directly by a mobile phone. Hence, it makes a lot of sense to look into programmable ZKPs beyond zk-SNARKs. Yet, because of the limited computational power of blockchains, blockchain projects have focused on succinct proofs, such that progress and industry-grade tooling are arguably most advanced there.
NOTE 2: It is possible to combine ZKPs based on CL-signatures or BBS(+) with proofs for arithmetic circuits. For instance, BBS can be used for a proof of knowledge of the issuer's signature that reveals commitments to selected attributes. Then, a programmable ZKP (e.g. a zk-SNARK) can be used to prove certain properties of the identity attribute (the preimage of the revealed hash), e.g. to compute a complex predicate. A well-known construction that follows this paradigm is LegoSNARK [i.49], implemented in the context of digital attestations, among others, by dock.io.
4.5.3 zk-STARKs

4.5.3.1 Introduction to zk-STARK
The abbreviation zk-STARK stands for "Zero-Knowledge Scalable Transparent ARguments of Knowledge", and is a collective term for a specific category of Zero-Knowledge Proof protocols. The zk-STARK protocols fulfil the criteria of a Zero-Knowledge Proof system, which enables one party (the prover) to prove to another party (the verifier) that a certain statement is true, without revealing any additional information beyond the truth of the statement itself. Furthermore, zk-STARKs are scalable, such that they allow for the creation of short proofs that are easy to verify, and they are transparent, meaning that anyone can verify the proof without needing any secret information. The zk-STARK characteristics can be broken down as follows (based on the initials S-T-ARK), in addition to catering for zero-knowledge (zk):
• Scalable: the prover algorithm is typically implemented with repeated functions (e.g. several hash functions).
• Transparent: the prover and verifier keys are generated verifiably in a trustless manner (i.e. without the need for a trusted setup).
• ARgument of Knowledge: a proof system that demonstrates the user's knowledge of data (not just its existence).
NOTE: A zk-STARK system provides predicate proofs, selective disclosure and unlinkability by design.
The concept of zk-STARK was initially described by Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev in a paper [i.18] in 2018. At the time of writing (in August 2025), two zk-STARK protocols have been published by cryptographic researchers:
• the zk-STARK protocol [i.17] in 2019; and
• Zilch [i.203] in 2021.
4.5.3.2 Setup of zk-STARK
Unlike the zk-SNARK frameworks, which in several cases require a trusted setup, the zk-STARK protocols are designed to be used without a trusted setup. Hence, the zk-STARK protocols are considered to be both transparent and universal: a transparent protocol is defined as one that does not require any trusted setup and uses public randomness, and a universal protocol is defined as one that does not require a separate trusted setup for each circuit.
4.5.3.3 Cryptography behind zk-STARK
The cryptography behind the zk-STARK schemes is based on Interactive Oracle Proofs (IOP) with scalable proofs. A Zero-Knowledge system based on IOP (ZK-IOP) [i.17] is a common generalization of the Interactive Proofs (IP), Probabilistically Checkable Proofs (PCP) and Interactive PCP (IPCP) models that were previously introduced for zk-SNARKs (see clause 4.5.2). The zk-STARK protocols are typically implemented using standard hash functions. As in the PCP model, the IOP verifier does not need to read all prover messages, but can rather query them at random locations; as in the IP model, prover and verifier interact over several rounds. Hence, a ZK-IOP system can be converted into an interactive ARgument of Knowledge (ARK) assuming a family of collision-resistant hash functions, and can be turned into a non-interactive argument in the random oracle model, which is typically realized using a standard hash function. Given the complexity of zk-STARK algorithms, a complete description of the cryptography for zk-STARK goes beyond the scope of the present document. For further reading about the cryptographic algorithms behind the zk-STARK framework, the following paper is recommended: Ben-Sasson et al. "Scalable, transparent, and quantum-safe computational integrity" [i.17].
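The hash-based mechanism that lets the verifier query prover messages at random locations can be illustrated with a simplified Merkle-tree commitment (a sketch only; no padding, domain separation or zero-knowledge blinding, and the leaf data is arbitrary):

```python
# Simplified Merkle-tree commitment and opening, as used to let an IOP verifier
# query a committed prover message at random positions. Illustrative only.

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_open(leaves, index):
    """Return the sibling path needed to verify leaves[index] against the root."""
    level = [H(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]   # the prover's oracle (8 entries)
root = merkle_root(leaves)                    # short commitment sent to the verifier
proof = merkle_open(leaves, 5)                # opening at a random query position
assert merkle_verify(root, leaves[5], 5, proof)
```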
4.5.3.4 Implementations
While the zk-STARKs developed by StarkWare are a prominent instantiation, they have been criticised for relatively low security (StarkWare's construction amplifies security by a proof of work [i.241]) - among other reasons because the concrete choice of security level is based on an additional unproven conjecture to allow for fast proving times. Moreover, the FRI-based IOP and the Merkle tree-based PCS scheme are not the only way to construct transparent, post-quantum secure zk-SNARKs. Other protocols use different IOP paradigms or PCS. For example, Aurora [i.20] is a zk-SNARK based on Interactive Oracle Proofs (IOPs) over Rank-1 Constraint Systems (R1CS), using the sumcheck protocol. Like the traditional zk-STARKs, it relies on polynomial commitments via Merkle trees, and hence does not require a trusted setup and is considered plausibly post-quantum secure. FRACTAL [i.67] is another transparent zk-SNARK that achieves both post-quantum security and recursive composition efficiency. It is also based on FRI as an IOP but uses different encoding techniques and soundness amplification strategies compared to zk-STARKs. Lastly, while constructions like Spartan and Ligero are not succinct, they still involve relatively small proofs (square-root scaling, as opposed to poly-logarithmic) and share the transparency and post-quantum security characteristics of zk-STARKs based on FRI and Merkle tree-based polynomial commitments. These examples show that zk-STARKs represent a well-developed point in a larger design space of transparent and plausibly post-quantum secure zk-SNARKs - underscoring that transparency and post-quantum security are properties that can be achieved in multiple ways, not just via the zk-STARK construction lineage. Potentially, zk-STARKs could replace zk-SNARKs for various applications in the future. For example, zk-STARKs could be used for the privacy and confidentiality of the ZeroCash protocol, which is currently implemented with zk-SNARKs. However, zk-SNARK proofs are roughly 1 000 times shorter than zk-STARK proofs, so replacing zk-SNARKs with zk-STARKs would require more research to either shorten proof length, or aggregate and compress several zk-STARK proofs using incrementally verifiable computation [i.17].
4.5.3.5 Cryptographic analysis
It makes sense to consider zk-STARKs as a special category of zk-SNARKs because they fulfil the same fundamental purpose - namely, enabling succinct ("scalable"), non-interactive zero-knowledge proofs - but with a distinct set of design trade-offs, particularly in terms of cryptographic assumptions, proof system architecture, and transparency. Zero-knowledge Scalable Transparent ARguments of Knowledge (zk-STARKs) differ from traditional zk-SNARKs mainly in that they forgo a trusted setup (i.e. they are transparent) and are constructed from information-theoretic rather than algebraic assumptions, relying only on collision-resistant hash functions for the polynomial commitment scheme. Since they avoid assumptions such as the knowledge-of-exponent assumption underlying Groth16 or the elliptic curve pairings underlying the KZG commitment scheme used by many of the popular zk-SNARKs, they are also plausibly post-quantum secure (in terms of soundness; the privacy guarantees are discussed above). These properties position zk-STARKs as a natural subclass of zk-SNARKs with additional guarantees, rather than a completely separate lineage. Grouping them this way emphasizes their role within the broader zk-SNARK family, defined by succinctness and non-interactivity, while allowing meaningful differentiation based on the underlying cryptographic assumptions and protocols. The zk-STARK schemes are considered plausibly quantum-safe, since they are based on a machinery of hash functions for implementing the IOP. If the hash functions used are quantum-safe (QSC), the zk-STARK scheme becomes quantum-safe.
4.5.4 ZK Bulletproofs
In their paper "Bulletproofs: Short Proofs for Confidential Transactions and More" [i.38], Bünz et al. (2017) introduce a non-interactive ZKP protocol aimed at addressing the issue of transaction size and verification time in existing privacy-preserving protocols. Specifically aiming to improve upon proposals for confidential transactions in cryptocurrencies, Bulletproofs support aggregation of range proofs and require no trusted setup. A Bulletproof is a zero-knowledge inner product argument. Specifically, it enables a prover to prove the correct computation of an inner product of two vectors a = [a1, …, an] and b = [b1, …, bn] such that v = ⟨a, b⟩ = a1b1 + … + anbn. The prover can do so while optionally hiding the vectors or the inner product result. The verifier receives Pedersen commitments to the input vectors and their resulting inner product, together with a proof it can use to verify the commitments and the honest and correct computation. By computing multiple inner products, it is possible to compute proofs for R1CS-formatted circuits directly using only EC point addition. In many contexts the size of the circuit can be limited, improving the performance of Bulletproofs significantly. For instance, in the context of age verification, an 8-bit circuit is enough. Here, the inner product of the bit-vector representation of the user's age, a, and the power-of-two vector equals the user's age, and it is easy to see that the inner product v = ⟨[a1, …, a8], [1, 2, 4, 8, 16, 32, 64, 128]⟩ effectively constrains this value to the range v ∈ [0, 255]. The inner workings of a Bulletproof are rather lengthy to detail here due to various optimizations used, but the core building block is the Pedersen commitment. Herein, it is enough to state that using a series of Pedersen commitments, the prover can prove a commitment to a polynomial and its correct evaluation at some value u. This is then used to create a ZKP of polynomial multiplication, which is important because it provides a way to create a ZKP of scalar multiplication as follows:
1) With commitment A to a and b, and commitment V to v, where v = ab, it is possible to prove that A and V are committed as claimed without revealing a, b, or v.
2) The prover chooses sL and sR randomly and adds these as linear terms, i.e. a becomes a + sLx and b becomes b + sRx.
3) The multiplication ab is given by the polynomial multiplication (a + sLx)(b + sRx) = ab + linear term + quadratic term.
4) Verification is then done using the prover-supplied commitments A and V.
Again, the details are rather lengthy. Suffice it to say that there exists an easy way to create a ZKP of scalar multiplication using a ZKP of polynomial multiplication, specifically by only focusing on the constant terms of the polynomial. Equipped with a ZKP of multiplication, it is easy to create a ZKP for the inner product by changing the coefficients from scalars to vectors, and commitments from scalar commitments to vector commitments. That is the essence of the Bulletproof ZK inner vector argument. The precise steps look a lot more complex as they include an optimization to create logarithm-sized proofs of knowledge for inner products. Furthermore, when using the Bulletproof ZK inner product argument for a range proof, multiple inner products are proven with one proof using a random linear combination. There are also additional constraints to enforce that the user honestly and correctly computes the inner product of a bit vector and the power-of-two vector.
All of these add complexity. In the context of the EUDIW and evaluating the relational predicate m < x < n in a privacy-preserving way, Bulletproof ZK inner product proofs can be used in the following way:
1) The issuer creates the commitment V = vG + γB, where G and B are two EC points with an unknown discrete log relationship, v is the value (e.g. the user's age in days), and γ is a random blind (a shared secret between issuer and user).
2) The issuer or the user creates commitments to the binary vector representation of v (and the associated proof that the vector truly is binary), and commitments to the required vector polynomials.
3) The verifier responds with a random challenge pair (which can be made non-interactive using Fiat-Shamir) that the prover uses to combine the three inner products into one.
4) The prover shares Pedersen commitments to the linear and quadratic coefficients.
5) The verifier responds with a random challenge u that the prover uses to evaluate the polynomial.
6) The verifier can now check that the inner product of the binary vector and the power-of-two vector equals the value commitment V, and that the evaluation is correct and honest. This serves as a proof that the committed value lies in the range [0, 2^n).
For some applications, range proofs of the form v ∈ [0, 2^n − 1] need to be adjusted to instead have some lower positive bound and an upper bound that is not a power of two. This requires shifting the committed value as follows:
• Given a lower bound l, the lower bound shift can be accomplished by V − lG = (v − l)G + γB. The prover can now use the shifted committed value to prove the range (v − l) ∈ [0, 2^n).
• Given an upper bound 2^n − s, the upper bound shift can be accomplished by adding sG to the initial commitment: V + sG = (v + s)G + γB. The prover can now use the shifted committed value to prove the range (v + s) ∈ [0, 2^n).
• Taken together, a tighter combined range is accomplished as v ∈ [l, 2^n − s). The two bound proofs can be aggregated into a single proof as described in Bünz et al. [i.38], section 4.3.
The primary goal of Bulletproofs is to provide a compact and efficient way of proving the correctness of a transaction, while hiding the specific details of the transaction itself. Bünz et al. [i.38] do mention other uses, including support for arithmetic circuits, verifiable shuffles (i.e. to prove that one list of committed values is a shuffle of another list of committed values), and privacy-preserving smart contracts in public blockchains. Each of these uses, however, can be done more efficiently in contexts with characteristics that differ from those of decentralized cryptocurrencies. It is not immediately apparent whether Bulletproofs are relevant for electronic attestations of attributes/person identification data, given that more performant options like HashWires exist. Further exploration or analysis may be needed to fully understand how Bulletproofs could be directly applicable to electronic attestations of attributes or person identification data.
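The arithmetic behind the bit-decomposition inner product and the bound shifts can be verified with plain integers (no commitments and no zero knowledge; the concrete values for n, v, l and s below are arbitrary examples):

```python
# Plain-integer illustration of the bit-decomposition inner product and the
# bound shifts described above. No Pedersen commitments, no zero knowledge.

n = 8                                        # 8-bit range proof: v in [0, 256)
v = 42                                       # e.g. the user's age in years

bits = [(v >> i) & 1 for i in range(n)]      # a = [a1, ..., a8], little-endian bits
powers = [1 << i for i in range(n)]          # [1, 2, 4, ..., 128]

# The inner product <a, powers> reconstructs v, and proving that every a_i is a
# bit constrains v to the range [0, 2^n).
assert sum(a * p for a, p in zip(bits, powers)) == v
assert all(a * (a - 1) == 0 for a in bits)   # binarity constraint: a_i in {0, 1}

# Bound shifting: to prove v in [l, 2^n - s), run the same range proof on the
# shifted values (v - l) and (v + s), both of which must lie in [0, 2^n).
l, s = 18, 156                               # illustrative bounds: v in [18, 100)
assert 0 <= v - l < 2**n
assert 0 <= v + s < 2**n
```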
5 (Q)EAA formats with selective disclosure
5.1 General

The present clause provides an analysis of a set of formats for selective disclosure. The topics for the analysis of each selective disclosure (Q)EAA format are:
• Signature scheme(s) used for selective disclosure and optionally unlinkability, when applicable with references to clause 4.
• Encoding of the (Q)EAAs used for selective disclosure.
• Maturity of the (Q)EAA format's specification and deployment.
• Cryptographic aspects, more specifically whether the cryptographic algorithms used for the selective disclosure (Q)EAA formats are approved by SOG-IS and allow for QSC algorithms for future use.
The (Q)EAA formats are categorized according to three of the main cryptographic schemes for selective disclosure:
• Atomic (Q)EAA formats, see clause 5.2. These (Q)EAA formats correspond to the (Q)EAA signature schemes described in clause 4.2.
• Multi-message signature (Q)EAA formats, see clause 5.4. These (Q)EAA formats correspond to the multi-message signature schemes described in clause 4.4.
• (Q)EAAs with hashes of salted attributes, see clause 5.3. These (Q)EAA formats correspond to the salted attribute hashes scheme described in clause 4.3.
NOTE 1: There is also a type of generic JSON container format (JSON Web Proofs), which allows for a mix of the selective disclosure signature schemes in clause 4, and is therefore treated as a separate category of (Q)EAA formats.
NOTE 2: The proofs for arithmetic circuits (such as zk-SNARKs) do not rely upon (Q)EAA formats per se, as they can prove the correct execution of any credential verification program in zero-knowledge. Hence, proofs for arithmetic circuits are out of scope for the present clause, which describes (Q)EAA formats. However, clause 6.5 describes solutions that are implemented based on a combination of programmable ZKPs (such as zk-SNARKs) with existing credentials (such as X.509 certificates).
5.2 Atomic (Q)EAA formats

5.2.1 Introduction to atomic (Q)EAA formats
The concept of atomic (Q)EAAs was introduced in clause 4.2. There are numerous (Q)EAA formats that can be issued with a single claim, so in principle a selective disclosure scheme based on atomic claims can be designed for a variety of (Q)EAA formats (ICAO DTCs, IETF JWTs, W3C Verifiable Credentials, X.509 certificates, etc.). Clauses 5.2.2 and 5.2.3, however, focus in more detail on two (Q)EAA formats that are used for atomic (Q)EAA schemes: PKIX X.509 attribute certificates and W3C Verifiable Credentials.
5.2.2 PKIX X.509 attribute certificate with atomic attribute
The PKIX X.509 Attribute Certificate (AC) profile is specified in IETF RFC 5755 [i.158]. An attribute certificate may contain attributes that specify group membership, role, security clearance, or other authorization information associated with the user. The attribute certificate is a signed set of attributes, although it does not contain a public key. Instead, the attribute certificate is linked to an X.509 Public Key Certificate (PKC), which can be used by the user for authentication. In order to preserve the user's privacy, the X.509 public key certificate may only include a pseudonym in the subject field. The attribute certificates are issued by an Attribute Authority (AA), and they may be issued with a short lifetime and with an atomic (single) attribute. These characteristics make short-lived attribute certificates with atomic attributes suitable for an access control service with selective disclosure features. A description of how to use PKIX X.509 attribute certificates for selective disclosure with an access control system is available in clause 6.2.1. The X.509 attribute certificates are ASN.1/DER encoded as described in IETF RFC 5755 [i.158]. X.509 certificates can be signed by the QTSP using cryptographic algorithms (RSA with proper key lengths or ECC with approved curves) that are published by SOG-IS [i.237]. For future use, the X.509 certificates can be signed with quantum-safe cryptographic algorithms [i.193]. The maturity of X.509 attribute certificates can be considered high, given that IETF RFC 5755 [i.158] is a mature PKIX standard.
5.2.3 W3C Verifiable Credential with atomic attribute
In preparation for enrolment of W3C Verifiable Credentials with atomic attributes, the EUDI Wallet would need to be equipped with Credential templates for the W3C Verifiable Credentials. The W3C Verifiable Credentials Data Model v1.1 [i.264] distinguishes between a Credential, "a set of one or more claims made by an issuer", and a Verifiable Credential, "a tamper-evident credential that has authorship that can be cryptographically verified". Put differently, a Verifiable Credential can be a signed Credential. Hence, the Credential(s) in the EUDI Wallet can consist of templates with the attribute properties that should be used for the enrolment of attribute values.
NOTE: The W3C Verifiable Credentials Data Model v1.1 [i.264] is a conceptual data model rather than a specific credential format. In this context of atomic attributes, however, the scope of W3C Verifiable Credentials can be limited to the JWT format.
A description of how to use the FIDO standard [i.56] as an authentication protocol in conjunction with Verifiable Credentials with atomic attributes for selective disclosure is available in clause 6.2.2. The encoding of the W3C Verifiable Credentials is specified as JWT or JSON-LD in the W3C Verifiable Credentials Data Model v1.1 [i.264]. W3C Verifiable Credentials can be signed by the QTSP using cryptographic algorithms (RSA with proper key lengths or ECC with approved curves) that are published by SOG-IS [i.237]. For future use, the W3C Verifiable Credentials can be signed with quantum-safe cryptographic algorithms as described in the IETF report on JOSE signatures with QSC algorithms [i.149]. The maturity of W3C Verifiable Credentials can be considered high, given the wide deployment of issued W3C Verifiable Credentials.
5.3 Formats of (Q)EAAs with salted attribute hashes

5.3.1 General
The general concept of selective disclosure based on salted attribute hashes is described in clause 4.3. As regards credentials within this category, there are several noteworthy formats. The formats that are described more in depth in the present document are:
• IETF SD-JWT, which is further described in clause 5.3.2.
• ISO mDL Mobile Security Object (MSO), which is elaborated in clause 5.3.3.
NOTE: ETSI EN 319 162-1 [i.88] specifies the Associated Signature Containers (ASiC), which is an XML-formatted manifest that binds together a number of hashed file objects into one single digital container. The principle of combining hashed objects in an ASiC manifest is similar to the IETF SD-JWT and ISO mdoc credentials with salted attribute hashes. There are however two main differences: ETSI ASiC is intended for combining file objects in a signature container manifest, whilst IETF SD-JWT and ISO mDL MSO are designed for selective disclosure. Furthermore, the ETSI ASiC hashes are not salted, whilst the hashed attributes in IETF SD-JWT and ISO mDL MSO are salted to cater for unlinkability. Hence, the comparison with ETSI ASiC is noted, but is nevertheless out of scope for the present clause.
In addition to the above two formats, the present document also includes a mention of disclosure mechanisms based on the proof mechanisms detailed in JSON Web Proofs, and describes a proposal that relies on Directed Acyclic Graphs (DAG).
5.3.2 IETF SD-JWT and SD-JWT VC

5.3.2.1 IETF SD-JWT
To support selective disclosure in JWT or JWS, IETF has specified the Selective Disclosure JSON Web Token (SD-JWT) [i.155]. The specification introduces two primary data formats: an SD-JWT, which is a composite structure consisting of a JWS plus optional disclosures, and an SD-JWT+KB, which is a composite structure of an SD-JWT and a Key Binding JWT (KB-JWT) that is used as a proof of possession for a private key corresponding to a public key embedded in the SD-JWT. At its core, an SD-JWT is a digitally signed JSON document that can contain salted attribute hashes that the user can selectively disclose using disclosures that are outside the SD-JWT document. This allows the user to share only those attributes that are strictly necessary for a particular service. The technique of SD-JWT is based on salted attribute hashes as described in clause 4.3. Each SD-JWT contains a header, payload, and signature, and optionally disclosures. The header contains metadata about the token, including the type and the signing algorithm used. The signature is generated using the issuer's private key. The payload includes the proof object that enables the selective disclosure of attributes. Each disclosure contains a salt, a cleartext claim name, and a cleartext claim value. The issuer then computes the hash digest of each disclosure and includes each digest in the attestation it signs and issues.
NOTE: The JOSE [i.169] signature format allows for SOG-IS approved cryptographic algorithms [i.237] and QSC algorithms [i.149] for future use.
During presentation, a holder selects the disclosures they want to reveal, produces (if the SD-JWT is bound to a key) a proof of possession that also signs over the revealed disclosures (the KB-JWT), and presents the composite of SD-JWT and KB-JWT to the verifier. The SD-JWT specification is still a draft, yet SD-JWT has been selected in the ARF [i.71] as the JSON format for selective disclosure and is in the final stages of the IETF standardization process. A thorough analysis of SD-JWT and how it can be applied for selective disclosure of the PID/(Q)EAA for the EUDI Wallet is available in clause E.1.
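The salted-hash principle behind a disclosure can be sketched as follows (a sketch only; the exact serialization and hash algorithm negotiation are defined in the IETF specification, and the claim name and value below are arbitrary examples):

```python
# Sketch of how an SD-JWT disclosure and its digest relate: a random salt, the
# claim name and the claim value are serialized, base64url-encoded and hashed.
# Illustrates the salted-hash principle of clause 4.3, not the normative rules.

import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

salt = b64url(secrets.token_bytes(16))                       # per-claim random salt
disclosure = b64url(json.dumps([salt, "age_over_18", True]).encode("utf-8"))
digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())

# The issuer signs an SD-JWT containing only `digest`; the holder later reveals
# `disclosure`, so the verifier can recompute the digest and learn the claim.
print(digest)
```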
5.3.2.2 IETF SD-JWT VC
While SD-JWT defines the general container format, SD-JWT-based Verifiable Credentials (SD-JWT VC) defines a data format and validation rules to express JSON-based Credentials based on SD-JWT. This is a usual pattern where a general container format is defined (e.g. JWT) and, based on that container format, concrete data formats are defined (e.g. Access Token, ID Token). SD-JWT VC defines a set of mandatory and optional claims that are not allowed to be selectively disclosable, in order to enable different additional features such as a way to resolve additional issuer metadata, a credential type mechanism, and a status (revocation) mechanism. SD-JWT VC does not fundamentally change the underlying mechanisms of SD-JWT, but allows for the creation of a digital credential ecosystem on top of it by adding essential mechanisms for such ecosystems that allow for type-based filtering, credential revocation, issuer key discovery, and additional display information.
5.3.3 ISO/IEC 18013-5 Mobile Security Object (MSO)
The Mobile Security Object (MSO) is specified in clause 9.1.2.4 of ISO/IEC 18013-5 [i.181] and contains the following attributes encoded in a CDDL [i.170] structure:
• digestAlgorithm: Message digest algorithm
• valueDigests: Array of digests of all data elements
• deviceKey: Device key in COSE_Key as defined in IETF RFC 8152 [i.167]
• docType: DocType as used in Documents
• validityInfo: validity of the MSO and its signature
The valueDigests are issued as IssuerSignedItems, which are the hash values of the ISO mDL attributes combined with random values (see ISO/IEC 18013-5 [i.181], clause 9.1.2.4). In other words, the MSO is a selective disclosure standard based on salted hashes of attributes (see clause 4.3), where the random values are the salts. The deviceKey contains the mdoc Authentication Key (see clause 7.2.2), which is protected by the user's PIN code or biometrics (see clause 7.6). The MSO is signed by the mDL Issuer Authority, which is an IACA X.509 CA (see clause 7.2.1.4), and the signature is COSE formatted.
NOTE 1: ISO/IEC 18013-5 [i.181], Table B.3 "Document signer certificate" lists the ECDSA curves BrainpoolP256r1, BrainpoolP384r1 and BrainpoolP512r1, which are also approved by SOG-IS [i.237].
NOTE 2: The COSE [i.162] signature format also allows for QSC algorithms [i.149] for future use.
An example of an MSO data structure is provided in ISO/IEC 18013-5 [i.181], annex D.5.2. The MSO is stored and protected in the device's SE/TEE. The MSO is included in the mDL Response for the device retrieval flow (see clause 7.2.3). ISO/IEC 18013-5 [i.181] is considered mature, and several device retrieval solutions with MSOs have been deployed in production, for example in a number of states in the US.
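The salted-hash principle behind the valueDigests can be sketched as follows (a simplified illustration only: the normative CBOR structure and wrapping of the IssuerSignedItem are defined in ISO/IEC 18013-5, and the element name and value below are arbitrary examples):

```python
# Simplified sketch of the salted-hash principle behind the MSO valueDigests:
# each data element is combined with a random value (the salt), CBOR-encoded and
# hashed. Not the normative ISO/IEC 18013-5 encoding.

import hashlib
import os

import cbor2  # third-party CBOR library, used here for illustration

issuer_signed_item = {
    "digestID": 1,
    "random": os.urandom(16),            # the salt
    "elementIdentifier": "age_over_18",
    "elementValue": True,
}

digest = hashlib.sha256(cbor2.dumps(issuer_signed_item)).digest()
# Only `digest` goes into the signed MSO; the issuer-signed item itself is
# disclosed by the holder when (and only when) the element is presented.
print(digest.hex())
```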
5.4 Multi-message signature (Q)EAA formats

5.4.1 W3C VC Data Model with ZKP
The W3C Verifiable Credentials (VC) Data Model v1.1 [i.264] contains clause 5.8 "Zero-Knowledge Proofs", which describes a data model that supports selective disclosure with the use of Zero-Knowledge Proof (ZKP) mechanisms. The W3C Verifiable Credentials Data Model states two requirements for Verifiable Credentials when they are to be used in ZKP systems:
• The Verifiable Credential contains a proof, so that the user can derive a verifiable presentation that reveals only the information that the holder intends to reveal.
• The credential definition (if being used) is defined in the JSON credentialSchema property, so that it can be used to perform various cryptographic operations in zero-knowledge.
The following cryptographic schemes that support selective disclosure while protecting privacy across multiple presentations have been implemented for the W3C Verifiable Credentials Data Model [i.264]: IRTF CFRG BBS [i.177], CL Signatures [i.42], Idemix [i.136], Merkle Disclosure Proof 2021 [i.259], Mercurial Signatures [i.45], PS Signatures [i.223], U-Prove [i.3] and Spartan [i.234]. More specifically, the W3C Verifiable Credentials Data Model standard includes examples of how to use Camenisch-Lysyanskaya (CL) signatures (see clause 4.4.1) with a W3C Verifiable Credential and a W3C Verifiable Presentation; see examples 24 and 25 in the W3C Verifiable Credentials Data Model [i.264] for examples of these data structures. An example of how to combine two W3C Verifiable Credentials into a W3C Verifiable Presentation with selected attributes is shown in Figure 9.

Figure 9: W3C Verifiable Credentials presented using ZKP

In Figure 9, selectively disclosed attributes from W3C Verifiable Credential 1 and W3C Verifiable Credential 2 are combined into a W3C Verifiable Presentation. CL-signatures are used in the Verifiable Presentation to create the proofs of knowledge of the original W3C Verifiable Credential signatures.
5.4.2 W3C VC Data Integrity with BBS Cryptosuite

5.4.2.1 W3C BBS Cryptosuite v2023
W3C BBS Cryptosuite v2023 [i.267] is an experimental draft specification, which defines a set of cryptographic suites for the purpose of creating, verifying and deriving proofs for the IRTF CFRG BBS [i.177] draft signature scheme that specifies BBS+ (see clause 4.4.2.4). The BBS+ signatures are compatible with any pairing-friendly elliptic curve; however, the cryptographic suites defined in the W3C BBS Cryptosuite specification allow the usage of the BLS12-381 curve for interoperability purposes.
NOTE: The W3C draft specification has the title "W3C BBS Cryptosuite v2023", although it describes the BBS+ scheme. The term BBS+ is however used throughout the present document to describe the multi-message signature scheme, whilst the term BBS04 describes the original single-message signature scheme.
W3C BBS Cryptosuite v2023 [i.267] can be used in conformance with the W3C Verifiable Credentials Data Integrity v1.0 specification [i.263], which in turn describes mechanisms for ensuring the authenticity and integrity of JSON-LD encoded credentials according to the W3C Verifiable Credentials Data Model v2.0, especially through the use of digital signatures and related cryptographic proofs. As a result, the IRTF CFRG BBS signature scheme (clause 4.4.2.4) can be applied to W3C Verifiable Credentials v2.0 and W3C Verifiable Presentations in order to disclose selected attributes, which are signed by the user's proofs, without revealing the entire W3C Verifiable Credentials and their original signatures.
5.4.2.2 W3C VC Data Integrity with ISO standardized BBS04/BBS+
The present clause analyses whether the ISO/IEC standardization efforts on BBS04/BBS+ (see ISO/IEC 20008-2 [i.184], ISO/IEC 24843 [i.185] and ISO/IEC CD 27565 [i.191], clause 4.4.6) are compatible with W3C BBS Cryptosuite v2023 and W3C Verifiable Credentials Data Integrity v1.1. At the time of writing (August 2025), ISO/IEC 20008-2 [i.184] mechanism 3 is thus far the only ISO standard that specifies the qSDH cryptographic primitives for BBS04. However, ISO/IEC 20008-2 [i.184] mechanism 3 is designed for single messages and is therefore neither compatible with W3C BBS Cryptosuite v2023 nor with W3C Verifiable Credentials Data Integrity v1.1. It has been proven [i.15] that BBS+ with multi-messages has the same security features as BBS04 with single messages, although BBS+ is not yet standardized by ISO. If ISO/IEC 24843 [i.185] is approved to standardize privacy-preserving attribute-based credentials schemes, the potentially new ISO standard may include a standardized version of BBS+ that is compatible with W3C BBS Cryptosuite v2023 and W3C Verifiable Credentials Data Integrity v1.1. Furthermore, ISO/IEC CD 27565 [i.191] refers to IRTF CFRG BBS (clause 4.4.2.4), whilst W3C BBS Cryptosuite v2023 also refers to IRTF CFRG BBS, so both ISO/IEC CD 27565 [i.191] and W3C BBS Cryptosuite v2023 share IRTF CFRG BBS as a common reference for the BBS+ scheme. Hence, if ISO/IEC 24843 [i.185] and/or ISO/IEC CD 27565 [i.191] standardize BBS+ according to IRTF CFRG BBS in conjunction with the DIF draft "Blind Signatures extension of the BBS Signature Scheme" [i.80], then W3C BBS Cryptosuite v2023 can be enhanced to reference such an ISO standard. In such a scenario, the W3C Verifiable Credential Data Integrity 1.0 specification will refer to an ISO-compliant version of W3C BBS Cryptosuite v2023. Finally, the W3C Verifiable Credentials Data Model v2.0 can be deployed with W3C Verifiable Credential Data Integrity 1.0, underpinned by an ISO-standardized version of BBS+.
NOTE 1: W3C Verifiable Credentials Data Model v2.0 with JSON-LD encoding has the potential to be underpinned by an ISO-standardized version of BBS+.
NOTE 2: W3C Verifiable Credentials Data Model v1.1 with JWT encoding does not refer to W3C Verifiable Credential Data Integrity 1.0, and can therefore not be supported by an ISO-standardized version of BBS+.
5.4.3 W3C Data Integrity ECDSA Cryptosuites v1.0
The W3C "Data Integrity ECDSA Cryptosuites v1.0" [i.256] specification describes a data integrity cryptosuite for use when generating a digital signature using the Elliptic Curve Digital Signature Algorithm (ECDSA). The data integrity cryptosuites are in conformance with the W3C Verifiable Credentials Data Integrity [i.263] specification. More specifically, selective disclosure is described in generalized terms according to the ECDSA-SD-2023 functions. The function createDisclosureData is used to generate a derived proof. The inputs include a JSON-LD document, an ECDSA-SD base proof, an array of JSON pointers to use to selectively disclose statements, and any custom JSON-LD API options (such as a document loader). The disclosure data object is produced as output, which contains the selectively disclosed fields of the JSON-LD document along with the ECDSA-SD proof.
5.4.4 Hyperledger AnonCreds (format)
The Hyperledger AnonCreds [i.131] credentials are JSON-formatted according to public AnonCreds objects, which in turn are defined by Schemas, CredDefs, Revocation Registry Definitions and Rev_Reg_Entrys. These objects are published by the issuers to repositories called Verifiable Data Registries (VDRs), which are accessible to users and verifiers to enable presentation generation and verification. AnonCreds can also be issued in accordance with the W3C Verifiable Credentials Data Model. AnonCreds are bound to the user with a non-correlatable secret, only known to the user itself, called a link secret. The link secret is a blinded attribute that is sent to the issuer during credential issuance. The issuer signs every claim (including the blinded link secret) individually, enabling selective disclosure. A Pedersen commitment is used for the link secret. This means the issuer does not know the exact value of the link secret, and the holder can prove the ownership of credentials to a verifier without disclosing a persistent identifier. A user can link two attestations by generating a proof that the two exponents in the Pedersen commitments are equal, i.e. that they contain the same link secret. The cryptographic signature scheme used by AnonCreds is CLRSA-signatures (see clause 4.4.1), which cater for selective disclosure and full unlinkability. More information about the AnonCreds protocols is available in clause 6.4.1.
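The idea of committing to a link secret can be sketched with a toy Pedersen commitment (illustrative parameters only, not a secure group and not the AnonCreds/CL-RSA setting; all names and values below are examples):

```python
# Toy Pedersen commitment to a link secret, illustrating how an issuer can sign
# a blinded attribute without learning the link secret itself. Insecure toy
# parameters; real deployments use properly generated groups.

import secrets

p = 2**127 - 1                            # a prime; toy modulus only
g = 3
x = secrets.randbelow(p - 2) + 2
h = pow(g, x, p)                          # in practice h is chosen so nobody knows log_g(h)

link_secret = secrets.randbelow(1 << 64)  # known only to the holder
blinding = secrets.randbelow(p - 2) + 1

commitment = (pow(g, link_secret, p) * pow(h, blinding, p)) % p

# Two credentials carrying commitments to the same link_secret can be linked by
# the holder in zero knowledge (equality of committed exponents), without
# revealing a persistent identifier to the verifier.
print(commitment)
```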
5.4.5 Cryptographic analysis
The maturity of W3C Verifiable Credentials can be considered high, given the wide deployment of issued W3C Verifiable Credentials. However, BBS+, CL signatures and ECDSA are not quantum-safe [i.244] (see also clause 9), and they are additionally not standardized by NIST in the US or by SOG-IS in the EU. Furthermore, since AnonCreds are based on CLRSA-signatures, the cryptographic algorithms are neither considered quantum-safe nor SOG-IS approved.
5.5 JSON container formats

5.5.1 IETF JSON WebProof (JWP)
The JOSE [i.152] standard is a widely adopted container format for JSON-formatted Keys (JWK), Signatures (JWS), and Encryption (JWE). For example, JWTs with JOSE containers are used by the OpenID Connect standard and by W3C Verifiable Credentials. However, JOSE is not designed to cater for the growing number of selective disclosure and ZKP schemes. Most of these emerging cryptographic schemes require additional transforms, are designed to operate on subsets of messages, and have more input parameters than traditional signature algorithms. Examples of selective disclosure signature schemes that would benefit from a more flexible JSON container format are:
• BBS+ [i.177];
• CL Signatures [i.42];
• Idemix [i.136];
• Merkle Disclosure Proof 2021 [i.259];
• Mercurial Signatures [i.45];
• PS Signatures [i.223];
• U-Prove [i.3]; and
• Spartan [i.234].
These schemes adhere to the same principles of collecting multiple attributes and binding them together into a single issued token, which is transformed into a presentation that reveals only a subset of the original attributes, predicate proofs, or proofs of knowledge of the attributes. In order to address these issues, the IETF JSON working group has drafted the JSON Web Proof (JWP) specification [i.152]. The JWP specification defines a new JSON container format similar in design to JSON Web Signature (JWS). However, JWS only integrity-protects a single payload, whilst JWP can integrity-protect multiple payloads in one message. JWP also specifies a new presentation form that supports selective disclosure of individual payloads, enables additional proof computation, and adds a protected header to prevent replay and support binding mechanisms. The JWP payload can contain JSON Proof Tokens (JPTs) [i.151]. A JSON Proof Token (JPT) is a compact, privacy-preserving representation of attributes. The attributes in a JPT are encoded as base64url-encoded JSON objects, allowing them to be digitally signed and selectively disclosed in the JWP payload. JPTs also support reusability and unlinkability when being used for Zero-Knowledge Proofs (ZKPs). A CBOR-based representation of JPTs is also defined in the JPT draft, called a CBOR Proof Token (CPT). It has the same properties as JPTs, but uses the JSON Web Proof (JWP) CBOR Serialization rather than the JSON-based JWP Compact Serialization. Furthermore, the JSON Proof Algorithms (JPA) specification [i.150] defines IANA registries for the cryptographic algorithms and identifiers to be used with the JSON Web Proof, JSON Web Key (JWK), and COSE specifications.
5.5.2 W3C JSON Web Proofs For Binary Merkle Trees
In hash-based cryptography, the Merkle signature scheme is a digital signature scheme based on Merkle trees and one-time signatures such as the Lamport signature scheme. It was developed by Ralph Merkle in the late 1970s and is an alternative to traditional digital signatures such as DSA or RSA. An advantage of the Merkle signature scheme is that it is plausibly quantum-safe. Note that SPHINCS+ [i.238] can be considered an evolution of Lamport signature schemes that is more efficient and not one-time, but can be used multiple (yet a limited number of) times. The JSON Web Proofs For Binary Merkle Trees specification [i.258] defines a generic encoding of Merkle audit paths that is suitable for combining with JWS to construct selective disclosure proofs. The specification is suitable for more generic applications and formats such as W3C Verifiable Credentials [i.264] and W3C Decentralized Identifiers [i.257]. JSON Web Proofs (see clause 5.5.1) are used as the format for encoding the binary Merkle trees. Selective disclosure is defined in the same way as full disclosure, with the exception that the rootNonce is not encoded in the compressed representation. The rootNonce is omitted in order to ensure that a selective disclosure proof does not reveal information that can be used to brute force siblings of disclosed members. Merkle proofs are already being used to provide certificate transparency in IETF RFC 9162 [i.171]. The JSON Web Proofs For Binary Merkle Trees specification [i.258] is however independent of the certificate transparency specification.
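The selective disclosure principle behind the specification can be sketched as follows. The attribute names, salts and tree layout are illustrative (and assume a power-of-two number of leaves); the normative encodings, including the rootNonce handling, are defined in [i.258].

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Each attribute is salted and hashed; the salted hashes form the leaves of a binary
# Merkle tree, and an audit path lets a verifier check one disclosed leaf against the
# signed root without learning the sibling attributes.
attributes = [b"family_name=Doe", b"given_name=Jane", b"birth_date=1990-01-01", b"nationality=DE"]
salts = [os.urandom(16) for _ in attributes]
leaves = [h(salt + attr) for salt, attr in zip(salts, attributes)]

def merkle_root(nodes):
    while len(nodes) > 1:
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def audit_path(nodes, index):
    path = []
    while len(nodes) > 1:
        sibling = index ^ 1                       # sibling of the current node
        path.append((nodes[sibling], sibling < index))
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

root = merkle_root(leaves)                        # the issuer signs this root, e.g. with JWS
path = audit_path(leaves, 2)                      # disclose only the birth_date attribute
print(verify(h(salts[2] + attributes[2]), path, root))  # True
```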
5.5.3 JSON Web Zero Knowledge (JWZ)
JSON Web Zero-knowledge (JWZ) [i.141] is an open standard for representing messages proven by zero-knowledge technology. A JWZ message consists of three parts: • Header - defines the features of the JWZ token. • Payload message - contains the message that will be shared with the relying party (verifier). • Signature - represents a zero-knowledge authentication proof. The parts are Base64-encoded and separated by dots in the JWZ message. The JWZ header consists of the following parameters: • alg - the zero-knowledge algorithm that is used for proof generation. • circuitId - the circuit that is used for proof generation. • crit - describes the list of header keys that the verifier has to support. • typ - the MIME type of the message. In the JWZ case, it is the protocol type of a packed message application/iden3-zkp-json. The JWZ payload can be any arbitrary message, for example a DIDComm message in the Iden3 protocol. The JWZ signature is a zero-knowledge proof, which is based on a specific auth circuit. An auth circuit is a programmable zero-knowledge circuit that generates a ZK proof based on a set of inputs of the message. The JWZ zero-knowledge proof could for example use "groth16" as alg and "authV2" as circuitId. The JWZ messages are used with the Iden3 protocol, which is described in clause 6.9. More information about JWZ and complete examples are available in [i.141].
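The three-part structure can be sketched in Python as follows. The header parameters come from the description above; the payload and proof values are hypothetical placeholders, since a real JWZ proof is produced by the zero-knowledge circuit (e.g. groth16 over the authV2 circuit), not by this sketch.

```python
import base64
import json

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {
    "alg": "groth16",
    "circuitId": "authV2",
    "crit": ["circuitId"],
    "typ": "application/iden3-zkp-json",
}
payload = {"id": "123", "body": {"message": "example DIDComm message"}}   # arbitrary message
zk_proof = {"proof": "<groth16 proof>", "pub_signals": ["<public inputs>"]}

# The three Base64-encoded parts are joined with dots, as described above.
jwz = ".".join(b64(json.dumps(part).encode()) for part in (header, payload, zk_proof))
print(jwz)
```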
6 Selective disclosure systems and protocols

6.1 General
The present clause provides an analysis of a set of systems and protocols for selective disclosure. The topics for the analysis of each selective disclosure protocol are: • Signature scheme(s) used for selective disclosure and optionally Zero-Knowledge Proofs, when applicable with references to clause 4. • (Q)EAA format(s) for selective disclosure, when applicable with references to clause 5. • Protocol(s) for presentation of the user's (Q)EAAs to a relying party. • Maturity of the protocol's specification and deployment. • Cryptographic aspects, more specifically whether the cryptographic algorithms used for the selective disclosure protocol are approved by SOG-IS and allow for QSC algorithms for future use. The protocols are first categorized according to the four main cryptographic schemes for selective disclosure: • Atomic (Q)EAA protocols, see clause 6.2. These protocols correspond to the (Q)EAA signature schemes described in clause 4.2 and formats in clause 5.2. • Salted attribute hashes protocols, see clause 6.3. These protocols correspond to the salted attribute hashes schemes described in clause 4.3 and formats in clause 5.3. • Multi-message signature protocols, see clause 6.4. These protocols correspond to the multi-message signature schemes described in clause 4.4 and formats in clause 5.4. • Proofs for arithmetic circuits protocols, see clause 6.5. These protocols correspond to the proofs for arithmetic circuits described in clause 4.5. In addition to the traditional categories listed above, the following systems are described, which are based on a mix of selective disclosure schemes: • Anonymous attribute based credentials systems, see clause 6.6. • ISO mobile driving license (ISO mDL), see clause 6.7.
6.2 Atomic attribute (Q)EAA presentation protocols

6.2.1 PKIX X.509 attribute certificates with single attributes
An access control system based on PKIX X.509 certificates with atomic attributes is illustrated in Figure 10. Figure 10: Overview of attribute certificate authorization First, the system is configured by a Certification Authority (CA) that issues a PKIX X.509 public key certificate to a user's wallet. The user has a corresponding private key protected in the wallet, such that the user can be authenticated with the public key certificate. The public key certificate may contain only a pseudonym. The Certification Authority also issues short-lived PKIX X.509 attribute certificates with atomic attributes. The attribute certificates are associated with the public key certificate, and they may be stored in the user's wallet and/or in a central repository. Second, the user authenticates to a relying party (with an access control system) by using the public key certificate. For example, TLS/SSL could be used for this authentication. If the public key certificate only contains a pseudonym of the user, the authentication protocol does not reveal the user's identity. Third, the user's attribute certificate(s) are submitted to the relying party's access control system. The attribute certificate(s) may either be pushed from the client to the relying party, or pulled from the repository by the relying party. For more information about attribute certificate architectures, see IETF RFC 5755 [i.158]. An alternative design of using attribute certificates for anonymous authorization is described in the paper "A First Approach to Provide Anonymity in Attribute Certificates" [i.23] from 2004. The PKIX X.509 certificates can be signed with SOG-IS approved cryptographic algorithms and allow for QSC algorithms for future use, meaning that the attribute certificate access control solution meets the SOG-IS requirements on cryptographic algorithms.
6.2.2 VC-FIDO for atomic (Q)EAAs
Another example of a protocol for selective disclosure based on atomic (Q)EAAs is the VC-FIDO [i.56] integration that was developed at the University of Kent. The atomic (Q)EAA format used is the W3C Verifiable Credential, which is described in clause 5.2.3. In order to issue the atomic W3C Verifiable Credentials to an EUDI Wallet, the user needs to be identified or authenticated to a QTSP. The VC-FIDO integration is based on the W3C WebAuthn protocol in the FIDO2 standard. The WebAuthn [i.266] stack is extended with a W3C Verifiable Credentials enrolment protocol, resulting in a client that can enrol for multiple atomic short-lived W3C Verifiable Credentials based on W3C Credential templates. These atomic short-lived W3C Verifiable Credentials can then be (temporarily) stored in an EUDI Wallet, and be combined into a Verifiable Presentation that is presented to the relying party (verifier). Selective disclosure is achieved since the user can enrol for the atomic attributes it needs for a specific use case, and present only those atomic (Q)EAAs to a Relying Party. The VC-FIDO integration was presented by David Chadwick at SHACK2020 [i.56]. This presentation explains the VC-FIDO architecture diagrams and shows a demo of how the client enrols for three atomic W3C Verifiable Credentials (address, driving license, and credit card) that are combined into a Verifiable Presentation as a parking ticket. The VC-FIDO integration is still a prototype, which is deployed as a pilot at the National Health Service (NHS) in the UK. The W3C Verifiable Credentials can be signed with SOG-IS approved cryptographic algorithms and allow for QSC algorithms for future use, meaning that the VC-FIDO solution meets the SOG-IS requirements on cryptographic algorithms.
6.3 Salted attribute hashes protocols

6.3.1 OpenAttestation (Singapore's Smart Nation)
OpenAttestation, which is part of Singapore's Smart Nation initiative and developed within GovTech's Government Digital Services, is an open source framework for verifiable documents and transferable records. OpenAttestation allows a user to prove the existence and authenticity of a digital document. It makes use of smart contracts on the Ethereum blockchain to store cryptographic proof of individual documents. As an alternative to using the Ethereum blockchain, OpenAttestation can also be used to create verifiable documents using digital signatures. More specifically, OpenAttestation provides Document Integrity [i.204] based on a target hash of salted attribute hashes. An overview of the OpenAttestation Document Integrity flow is illustrated in Figure 11. Figure 11: Overview of the OpenAttestation scheme The target hash of the digital document is calculated as follows: sort the salted attribute hashes alphabetically and hash them all together, using the KECCAK256 algorithm to compute the target hash. During verification of the digital document, the same exact steps are performed again, and the result is compared to the target hash. If the two hash values match, the integrity of the digital document is intact. Since the OpenAttestation scheme is based on salted attribute hashes, which can be signed with QSC algorithms, it can be considered plausibly quantum-safe.
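The target hash computation described above can be sketched as follows. The field names, salt format and serialization are simplified compared to the OpenAttestation framework; only the "salt, hash each attribute, sort, hash together with Keccak-256" structure is illustrated. The sketch assumes the pycryptodome package for Keccak-256 (which differs from SHA3-256).

```python
import json
import os
from Crypto.Hash import keccak  # pycryptodome

def keccak256_hex(data: bytes) -> str:
    return keccak.new(digest_bits=256, data=data).hexdigest()

document = {"name": "Jane Doe", "degree": "BSc", "issued": "2024-06-01"}

# 1) Salt and hash every attribute individually (simplified serialization).
salted_hashes = [
    keccak256_hex(json.dumps({k: v, "salt": os.urandom(16).hex()}).encode())
    for k, v in document.items()
]

# 2) Sort the salted attribute hashes alphabetically and hash them all together.
target_hash = keccak256_hex("".join(sorted(salted_hashes)).encode())
print(target_hash)
```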
6.4 Multi-message signature protocols and solutions

6.4.1 Hyperledger AnonCreds (protocols)
The Hyperledger AnonCreds (Anonymous Credentials) specification [i.131] is based on the open source verifiable credential implementation of Hyperledger AnonCreds that has been in use since 2017. The Hyperledger AnonCreds software stack was initially implemented as a combination of the Hyperledger Aries [i.132] protocols, the Hyperledger Indy [i.134] credentials, and the Hyperledger Ursa [i.135] SDK with features for public/private key pair management, signatures and encryption. Since 2022 all Hyperledger AnonCreds features have been merged into the Hyperledger AnonCreds project. The Hyperledger AnonCreds credential format is described in clause 5.4.4. Hyperledger AnonCreds are widely deployed, and are for example used by organizations such as the Government of British Columbia, IDunion, and the IATA Travel Pass.
6.4.2 Direct Anonymous Attestation (DAA) used with TPMs
Direct Anonymous Attestation (DAA) is a cryptographic protocol which enables remote authentication of a trusted computer while preserving the privacy of the user. ISO/IEC has standardized the DAA protocol in ISO/IEC 20008 [i.184]. The DAA protocol has been adopted by the Trusted Computing Group (TCG) in the Trusted Platform Module (TPM) v2.0 specification [i.242] to ensure the integrity of the computer while addressing privacy concerns. Furthermore, Intel® has also adopted DAA in the Enhanced Privacy ID (EPID) 2.0 specification. Since the ISO/IEC 20008 [i.184] standard specifies the cryptographic primitives of BBS and PS-MS, DAA is essentially based on BBS and PS-MS credentials with device binding and pseudonyms. The primary scope of a TPM is to ensure the integrity of a computer and its operating system. The purpose is to ensure that the boot process starts from a trusted combination of hardware and software, and continues until the operating system has fully booted and applications are running in a trusted state. A computer that is running in a trusted state can be better controlled with respect to software licences and protection against computer viruses and malware. The DAA eco-system consists of three entities: the DAA Member (i.e. TPM platform or EPID-enabled microprocessor), the DAA Issuer, and the DAA Verifier. The Issuer verifies the TPM platform during the Join step and issues a credential to the platform. The Member presents the credential to the Verifier during the Sign step; the Verifier can, based on a zero-knowledge proof, verify the credential without violating the platform's privacy. The DAA protocol also supports a blocklist such that Verifiers can prevent attestation attempts from TPMs that have been compromised. Furthermore, the DAA protocol splits the signer role into two parts. In brief, a principal signer (a TPM) signs messages in collaboration with an assistant signer (the standard computer into which the TPM is embedded). This split aims to combine the high level of security provided by the TPM, and to extend it with the computational and storage capacity offered by the computing platform. Chen et al. have specified the DAA protocol based on an ECC scheme [i.63] using Barreto-Naehrig curves, which is implemented by both TPM 2.0 and EPID 2.0. The DAA protocol standardized in ISO/IEC 20008 [i.184], and implemented according to the TPM 2.0 and EPID 2.0 specifications, is considered mature and has been deployed on computers at a very large scale. Since the DAA protocol is based on an ECC scheme, it is however not considered plausibly quantum-safe.
6.5 Proofs for arithmetic circuits solutions

6.5.1 Anonymous (Q)EAAs from programmable ZKPs and existing digital identities

6.5.1.1 Overview
This category is based on the principle of deriving anonymous (Q)EAAs by combining existing digital identities (such as X.509 certificates) with zero-knowledge proofs generated by general-purpose ZKP schemes (such as zk-SNARKs). A generalized model of such systems is described in the paper "Bringing data minimization to digital wallets at scale with general-purpose zero-knowledge proofs" [i.14] by Babel and Sedlmeir. The solution, which can be divided into three phases, is illustrated in Figure 12. Figure 12: Overview of proofs used with credentials
6.5.1.2 Setup phase
In the setup phase, the issuer generates the issuance key. This could for example be a PKIX CA that issues X.509 certificates, or a PKD compliant CA that issues ICAO eMRTDs. The credential format, revocation scheme, etc., are typically also specified and implemented in this phase. The digital wallet is provided with a witness generation program and a proof generation program, which implements the proofs for arithmetic circuits. Typically, the zk-SNARK circuits are integrated with the digital wallet by using a circuit compiler. The verifier's backend is provided with the server-side circuits of the zk-SNARK scheme, which allows the verifier to validate the ZKPs generated by the digital wallet. The verifier in this scenario is equivalent to a relying party in the eIDAS2 context.
6.5.1.3 Issuance phase
During the issuance phase the digital wallet generates a key-pair and submits the public key in a credential request to the issuer. The issuer creates and signs the credential, for example an X.509 certificate, and returns it to the digital wallet where it is installed. The issuance phase can for example be performed as described in the ETSI EN 319 411-1 [i.90] standard for trust service providers issuing certificates.
6.5.1.4 Proof phase
The proof phase is initiated by the verifier, who submits a proof request (including a nonce) to the digital wallet. The user selects the credentials to be used for verification, and the digital wallet runs the verification algorithm using the locally stored credentials. The verification algorithm depends on the credentials framework, which could for example be a PKIX CA, ICAO PKD, or SSI type issuer of W3C VCs. The digital wallet also creates a ZKP that this verification algorithm was run correctly, without providing any further information than the statement provided by the verifier. EXAMPLE: If a PKIX CA is used for issuance of X.509 certificates, the validation process should check that the user possesses the private key associated with the X.509 certificate, and that the X.509 certificate is valid (properly signed). The X.509 certificate status can be checked with respect to CA signature, expiry date, and revocation checks using OCSP. The digital wallet executes the programmable ZKP scheme with the selected credential and its validity as private inputs. The digital wallet generates the witness, proof and public outputs and sends the ZKP result to the verifier. Hence, the digital wallet can use the ZKP scheme to submit the credential's verification result and selected attributes or predicates that need to be disclosed to the verifier. In order for the verifier to trust the verification result, the digital wallet also creates a ZKP that certifies the correct execution of the verification program, yet without sharing any details about the inputs or the results of the credential verification algorithm. Hence, the ZKP scheme can prove that the verification algorithm that was locally executed by the digital wallet resulted in the shared statement. The verifier can use the ZKP to check that the digital wallet has a credential that was indeed issued by a particular CA, and that the user possesses the private key associated with the public holder binding key referenced in the credential.
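The split between private and public inputs in the proof phase can be outlined with the following Python sketch. All helper functions are hypothetical, stubbed placeholders; a real wallet would replace them with a concrete ZKP toolchain (circuit compiler, witness generator and prover) matched to its credential format.

```python
def verify_credential(credential: dict, nonce: str) -> bool:
    """Stub for the ordinary verification algorithm run locally in the wallet
    (issuer signature, expiry, revocation, holder binding over the nonce)."""
    return credential.get("valid", False)

def generate_zkp(private_inputs: dict, public_inputs: dict) -> dict:
    """Stub for the programmable ZKP: in a real system this proves that
    verify_credential() returned True and that the public outputs were derived
    correctly, without revealing the private inputs."""
    return {"proof": "<zk proof bytes>", "public_outputs": public_inputs}

# Verifier's proof request (public inputs).
proof_request = {"nonce": "nonce-123", "statement": "age_over_18"}

# Holder-side data (private inputs, never sent to the verifier).
credential = {"birth_date": "1990-01-01", "valid": True, "issuer": "QTSP-X"}

assert verify_credential(credential, proof_request["nonce"])
presentation = generate_zkp(
    private_inputs={"credential": credential},
    public_inputs={"nonce": proof_request["nonce"], "age_over_18": True, "issuer": "QTSP-X"},
)
# Only the proof and the public outputs are shared with the verifier.
print(presentation)
```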
6.5.2 Cinderella: zk-SNARKs to verify the validity of X.509 certificates
The Cinderella project is described in the paper "Cinderella: Turning Shabby X.509 Certificates into Elegant Anonymous Credentials with the Magic of Verifiable Computation" [i.77] by Delignat-Lavaud et al. As indicated by the title, the project is an implementation of how to validate X.509 certificates locally at the digital wallet, and share the results with a verifier by using a ZKP scheme. More specifically, the Cinderella project implemented a new format for application policies by composing X.509 templates, and provided a template compiler that translates C code for validating X.509 certificates within a given policy into an arithmetic circuit that allows for the generation of proving and verification programs. In order to produce a zero-knowledge verifiable computation scheme based on the Pinocchio [i.220] zk-SNARK, the Geppetto [i.72] cryptographic compiler was used. The Cinderella project was evaluated by two real-world applications: a plug-in replacement for certificates within TLS [i.159], and access control for the Helios [i.1] voting protocol. Fine-grained validation policies were implemented for TLS with revocation checking and selective disclosure of certificate contents, which turn X.509 certificates into anonymous credentials. For Helios, additional privacy and verifiability guarantees for voters equipped with X.509 certificates were obtained, such as those currently available from certain national ID cards. Rather than modifying the TLS standard and implementations, the X.509 certificate chains communicated during the TLS handshake were replaced with a single X.509 pseudo-certificate that carries a short-lived ECDSA public key and a proof that this key is properly signed with a valid RSA certificate whose subject matches the peer's identity. Also OCSP stapling can be communicated via the Cinderella version of TLS. National eID smartcards with X.509 certificates issued in Belgium, Estonia, and Spain have been evaluated with the Cinderella version of TLS. One immediate issue is proving performance. Since the resulting Cinderella pseudo-certificates can take up to 9 minutes to generate for complex policies on a computer, it is recommended that they are generated offline and refreshed typically on a daily basis. Once the setup is configured or refreshed, online verification of the Cinderella pseudo-certificates and their embedded proof takes less than 10 ms. Yet, progress in zk-SNARK proving performance - e.g. lookup table with PLONKish arithmetization, assembly provers for mobile platforms, and tolerance of "bigger" proofs (hundreds of kilobytes) would arguably make a re-implementation of Cinderella practical on mobile phones, with proving times in the low double-digit seconds range. NOTE: A vulnerability [i.130] in the Geppetto compiler that was found later would also require another toolchain to compile C-code to a ZKP (e.g. zk-SNARK) proving and verification algorithm.
6.5.3 zk-creds: zk-SNARKs used with ICAO passports
The zk-creds protocol was introduced in the paper "zk-creds: Flexible Anonymous Credentials from zkSNARKs and Existing Identity Infrastructure" [i.231] by Rosenberg et al. The zk-creds protocol uses programmable ZKPs in the form of zk-SNARKs to: • Remove the need for credential issuers to hold persistent signing keys. Instead, credentials can be issued to a bulletin board instantiated as a transparency log, a Byzantine system, or a blockchain. • Convert existing identity documents into anonymous credentials without modifying documents or coordinating with their issuing authority. • Allow for flexible, composable, and complex identity statements over multiple credentials. The second use case has been implemented by generating ZKPs of ICAO compliant eMRTDs (passports) to create anonymous credentials for accessing age-restricted videos. More specifically, the eMRTDs were NFC-enabled and issued by the US State Department, which signs a hash tree of the eMRTD data with a raw RSA signature. The ZKP is essentially generated based on the eMRTD's Data Group 1 (DG1), which contains the textual information available on the eMRTD's data page and the Machine Readable Zone: name, issuing state, date of birth, and passport expiry. It is worth mentioning that the ZKP-based verification of the RSA signature is not a standard part of the zk-creds construction owing to the process of reissuance to a bulletin board with more ZKP-friendly primitives.
6.5.4 Anonymous credentials from ECDSA

6.5.4.1 Overview of the research paper
Similar to the Cinderella research paper by Delignat-Lavaud [i.77], Matteo Frigo and Abhi Shelat have designed circuit-based (general-purpose) zero-knowledge proofs to construct proofs for the correct verification of digital certificates compatible with legacy formats and hardware-based key storage. Frigo and Shelat's results are published in the research paper "Anonymous credentials from ECDSA" [i.113]. While Delignat-Lavaud [i.77] focused on X.509 certificates and RSA-based holder binding, Frigo and Shelat consider the mdoc standard with ECDSA-based holder binding. As for the Cinderella paper, the main challenge Frigo and Shelat need to resolve is that the circuits for verifying SHA256 hashing, ECDSA signatures, and parsing (ASN.1 versus CBOR) have a substantial number of constraints and, therefore, require high compute and memory resources on the prover side. For instance, implementing the verification circuit in Circom (see also Crescent in clause 6.5.5 and FIDO-AC in clause 6.6.5) would correspond to several million constraints and a proving time on the order of a few minutes when implemented on mobile phones via the rapidsnark assembly prover. To overcome this issue, Shelat and Frigo introduce a variety of optimizations, including a novel ZKP system that combines the sum check protocol and Ligero proof system (which is known to be very memory-efficient) [i.7] and the avoidance of simulating foreign arithmetic (e.g. the P256 field) in the circuits. Thereby, Anonymous credentials from ECDSA achieves a more than 100x speedup for proving an ECDSA signature, which brings it to within one order of magnitude overhead compared to generating BBS proofs. Moreover, it achieves a 20x speedup of SHA256 hashing compared to Ligero, and similar performance to Binius [i.79], a recent ZKP system that is heavily optimized for binary operations. With these optimizations, the main remaining bottleneck of verifying MDOC presentations is the hashing of the MDOC document. The proof of possession of a valid MDOC presentation, including a few simple predicates (expiration check, age proof), takes around 1.2 seconds on a high-end mobile phone (such as a Google Pixel 6). While the implementation does not implement revocation checks, the presentation of the paper [i.113] at the Real World Crypto (RWC) conference included an outlook on how this could be implemented efficiently via signed pairs of adjacent revocation IDs for revoked certificates, leveraging the efficiency of verifying ECDSA signatures compared to hashing. However, with the performance improvements achieved for SHA256 hashing, other approaches, such as constructing an accumulator in the form of a SHA256-based Merkle tree over a bitstring-based revocation registry (Babel and Sedlmeir [i.14]), would also become practical, as the amount of data to be hashed in a corresponding proof would be comparable to the MDOC. The construction by Frigo and Shelat has met with great interest from the scientific community and led to standardization efforts as well as attempts to further improve on proving time by reducing the hashing operations in MDOCs (particularly considering that with ZKPs, selective disclosure via salted hashes becomes obsolete). However, while their optimizations make use of several well-established mathematical constructions in ZKP research, it is nevertheless a highly complex construction leveraging multiple complex optimizations that may take standardization and certification bodies a long time to become familiar with.
On the other hand, it is probably fair to say that as of now, their work by far remains the most performant in terms of proof generation for legacy-compatible credentials. Nevertheless, it should be mentioned that compared to other approaches based on circuit-based ZKPs, such as the Iden3 protocol, the ZKPs have substantially higher proof sizes and, therefore, impose a higher verification effort on servers. However, this reflects well the typical distribution of available resources on the holder and relying party side. Moreover, other benefits, such as transparent setup and plausible post-quantum security, which are not met by constructions based on, e.g. the Groth16 proof system, arguably by far outweigh this aspect in the bilateral electronic identification use case.
6.5.4.2 Implementation and standardization
The results in the publication "Anonymous credentials from ECDSA" [i.113] can be applied to the ISO/IEC 18013-5 [i.181] mobile driving license (ISO mDL). The ISO mDL presentation protocol is modified so that the user instead produces a zero-knowledge proof, which proves that their mdoc verifies with respect to the requested attributes. The following public attributes are shared from the EUDI Wallet's ISO mDL application with the relying party: the public key of the identity-issuer, an attribute value to disclose (such as the "age over 18" boolean), the name of this attribute (as specified in the MSO), the liveness transcript, and the tnow parameter to verify that the mdoc has not expired. The rest of the mdoc values in the statement are hidden, in the sense that the relying party is convinced that such values exist, but does not learn them through the interaction. This zero-knowledge argument (ZKARG) implementation for the ISO mDL application is subject to standardization within ISO/IEC JTC 1/SC 17 WG10 (ZKP). In addition to the standardization initiative in ISO/IEC JTC 1/SC 17 WG10, IETF CFRG has published the draft specification "libZK: a zero-knowledge proof library" [i.147]. The IETF CFRG draft specification specifies the ZK proof system that is described by Frigo and Shelat in "Anonymous credentials from ECDSA" [i.113]. This ZK proof system consists of two major components: the outer proof is a Ligero ZKP that checks a property on a committed transcript, whilst the committed transcript corresponds to a proof for a bespoke verifiable-computation scheme. The document [i.147] defines an algorithm for generating a succinct non-interactive zero-knowledge argument that for a given input x and a circuit C, there exists a witness w, such that C(x,w)=0.
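To make the statement form concrete, the following toy relation in Python (not taken from the libZK draft, which targets circuits as large as the complete mdoc verification algorithm) shows a public input x, a secret witness w, and a circuit C that is satisfied when C(x,w)=0:

```python
# Toy relation only, to illustrate the statement form C(x, w) = 0.
def C(x: int, w: int) -> int:
    # Satisfied (returns 0) exactly when w is a square root of x modulo 2**61 - 1.
    p = 2**61 - 1
    return (w * w - x) % p

x = pow(123456789, 2, 2**61 - 1)   # public input known to the verifier
w = 123456789                      # secret witness known only to the prover
assert C(x, w) == 0                # the ZK argument proves knowledge of such a w, without revealing it
```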
6.5.5 Crescent: Stronger Privacy for Existing Credentials
Similar to the work by Frigo and Shelat [i.113], Paquin et al. [i.219] argue that compatibility with legacy credentials is essential for fast adoption, that circuit-based ZKPs can achieve such compatibility, and that proving time remains the key challenge of circuit-based ZKPs, particularly on mobile phones. This work considers two legacy credential formats, JWTs and mdoc. The key trick applied by Paquin et al. [i.219] follows a similar idea to that already discussed in the Cinderella paper [i.77], namely, to separate the parts of the statements to be proved that are repetitive (e.g. verification of the signature on the certificate) from those that change in every interaction (e.g. verification of the holder's signature on the relying party's unique challenge, as well as verification of non-expiration according to a fresh timestamp supplied by the relying party). The circuit size for the one-time creation of the pseudo-certificate is similar to the circuit in Cinderella [i.77] and as discussed in [i.14] - around 2 million R1CS constraints, dominated by hashing and ECDSA verification. Taking aside the one-time work to create a proof of the repetitive statements (the result is often called a pseudo-certificate, which includes a derived certificate as well as a ZKP of its correct derivation), the work for the user (prover) that needs to be performed for every verification process reduces to around 20 ms by making use of a sigma-protocol for possession of an ECDSA signature where the committed public key coincides with the one committed in the pseudo-certificate. In terms of combining sigma-protocols with zk-SNARKs, the approach shares some ideas with LegoSNARK [i.49], yet the use is different as LegoSNARK would use circuit-based ZKPs for more complex predicates that cannot be covered in BBS credentials, such as binding to an ECDSA key. As such, there are fewer performance optimizations necessary, and the implementation relies on Circom [i.68], which has seen several years of adoption and auditing in blockchain projects (see also clause 6.9 about the Iden3 protocol). While the Groth16 proofs supported by Circom involve a trusted setup and are not post-quantum secure, they are shorter (around 1 kB) and faster to verify (around 15 ms) than the ones constructed by Frigo and Shelat [i.113]. For hardware binding, Crescent makes use of recent work by Woo et al. [i.253], which is itself an optimized circuit-based ZKP that accelerates proof generation for ECDSA-based signatures by almost an order of magnitude. If further acceleration is needed, Woo et al. also devise a pre-computation that leaves only a sigma protocol involving the verifier's challenge to be executed. As such, the construction can achieve similar performance to creating BBS proofs. While all components used to achieve this performance are quite well established, their composition nevertheless introduces a substantial amount of complexity that is difficult to compare with, e.g. the construction by Frigo and Shelat [i.113].
6.5.6 Analysis of systems based on programmable ZKPs
The protocols that combine general-purpose ZKP schemes and digital identities provide some valuable characteristics: • The existing digital identity infrastructures can be re-used as is, more specifically the eIDAS2 framework of X.509 certificates. This covers secure hardware for issuers' signing keys and secure hardware in mobile phones, as commonly used with FIDO2. In particular, the issuance process would not need to be changed at all if the hardware attestation chain for the holder binding keypair is checked by the issuer in this step (which should usually be the case). • The existing validation algorithm and revocation checking schemes can be executed in the digital wallet. That is, all the checks that the verifier usually does (verification of the (Q)EAA's signature, holder binding, expiration date, OCSP status or signature on the CRL and inclusion/exclusion of the (Q)EAA, etc.) will now be executed in the wallet app, and only the result of the verification, the explicitly requested attributes, and a ZKP of the correctness of these results will be shared with the relying party. NOTE 1: Some of the "comparison" values, such as the challenge for holder binding check or threshold values for range proofs (expiration, age), may need to be communicated from the verifier to the prover, and then be disclosed as public output of the ZKP. • Only the relevant information about the credential's validity and selected attributes or predicates needs to be shared with the verifier because the holder also shares a zk-SNARK of correct local verification with the verifier. • Both the credentials and the zk-SNARK protocol can be designed with cryptographic algorithms that are plausibly quantum-safe. • Features such as very general predicates (e.g. proof of location within a certain region based on coordinates included in the (Q)EAA or verifiable pseudonyms derived from the holder's public key) and designated verifier proofs that can improve both security and privacy guarantees are easy to implement. Furthermore, equality proofs across different (Q)EAA can help achieve consistency (i.e. that all (Q)EAA belong to the same subject or wallet), without the need to disclose a subject- or wallet-specific cryptographic identifier. Another interesting predicate is membership of the issuer among a list of trusted issuers, as this can further improve herd anonymity if multiple issuers use the same (Q)EAA format. Lastly, certificate chaining falls into this category: a holder can prove the validity of a certificate chain, only disclosing the issuer of the top ("root") certificate. In a large-scale system of (Q)EAA, where, e.g. every municipality can issue (Q)EAA yet the permissions to do so are managed on a federal/state level, this can substantially contribute to unlinkability. • General predicates may be particularly useful to facilitate data-minimizing (Q)EAA issuance. For instance, a holder could ask an issuer to bind a new (Q)EAA to the same public key and name as another (Q)EAA previously presented to the issuer, without disclosing the raw name or public holder binding key. This addresses a key issue in the AnonCreds' Link Secret, which relies on the honesty of holders at the time of issuance. • Designated verifier properties that are challenging to achieve concurrently with unlinkability and non-interactiveness can be easily implemented.
Designated verifier proofs allow the holder to make sure that only the designated recipient is convinced of the correctness of the verifiable presentation, mitigating risks of monetization of sensitive, attested (Q)EAA and of man-in-the-middle attacks. However, the anonymous credential schemes described in the present clause are still under research and development, and have not been deployed at scale. Hence, the maturity can be considered as low, although they provide a promising option for zero-knowledge proofs for the future of eIDAS2 and the EUDI Wallet. Moreover, arithmetic circuits for commonly used cryptographic primitives, such as SHA256, RSA, and ECDSA, are very complex and involve higher proving times than common digital signature schemes such as ECDSA. Proving time may be even worse for lattice-based post-quantum secure digital signatures. The programmable ZKP systems that are most mature (zk-SNARKs) add some pronounced tradeoffs, e.g. the generality of preprocessing versus performance aspects. NOTE 2: Among the key benefits of general-purpose ZKPs is their flexibility to adapt to legacy formats and cryptographic constructions, and to be able to flexibly accommodate future updates (e.g. switching to plausibly post-quantum secure signature schemes for holder binding and issuance). Nevertheless, as the proving resources are still considered a bottleneck, it may make sense to make minor modifications to existing certificate formats (e.g. to reduce the number of hashing operations). Another approach that has sometimes been brought up is using formats (e.g. salted Merkle trees) that are efficient within a general-purpose ZKP but can also be used in a "hybrid" scheme that only facilitates selective disclosure in a transition period where low-end mobile devices do not have enough computational power to generate ZKPs with low latency and, thus, a high degree of usability. NOTE 3: As a generalization of ZKPs, Multi-Party Computation (MPC) could also facilitate data-minimizing (Q)EAA presentations. Yet, ZKPs can be considered more mature, and since a ZKP is a specific form of MPC, it is also unlikely that a general-purpose MPC protocol applied to the specific case of a (Q)EAA presentation will be more performant than the best general-purpose ZKP constructions. On the other hand, (Q)EAA presentations based on fully homomorphic encryption are conceivable, which relies on quite different cryptographic constructions. Yet, thus far, this direction seems to have received little research. NOTE 4: The high degree of complexity of general-purpose ZKP construction, paired with its still fast progress, may pose a substantial barrier to its standardization. Even if regulators cannot agree on the certification of a ZKP scheme, the private sector may do so with the "layered" approach that can accommodate existing (Q)EAA formats. For an analysis of the corresponding accountabilities, see [i.228].
6.6 Anonymous attribute based credentials systems

6.6.1 Idemix (Identity Mixer)
The Idemix (Identity Mixer) technology [i.136] was invented by IBM® Research in 2008. The Idemix system caters for strong, privacy-preserving authentication based on Attribute Based Credentials (ABC). In summary, the Idemix scheme contains two protocols: issuing the credential to a user and presenting it when accessing a relying party. An overview of the Idemix ABC scheme is illustrated in Figure 13. Figure 13: Overview of the Idemix ABC scheme The Idemix system supports selective disclosure based on unlinkable Zero-Knowledge Proofs, such that users can prove that they are over 18 years old without revealing their name or birthdate. Idemix uses the CL-signature scheme (see clause 4.4.1) to prove knowledge of a signature in a Zero-Knowledge Proof. NOTE 1: CL-signatures are not SOG-IS approved and not plausibly quantum-safe. The Idemix solution has been implemented by IBM® Identity Mixer [i.136], Hyperledger Fabric [i.133], Radboud University Nijmegen's IRMA project [i.227], and the EU-project PrimeLife [i.224]. The Idemix system was also selected as an ABC solution by the EC-funded project Attribute based Credentials for Trust (ABC4Trust) [i.137]. NOTE 2: Idemix is similar to U-Prove (see clause 6.6.2) in the sense that both protocols are based on privacy-preserving ABC technology, although the iterations in the issuance phase and the underlying cryptographic algorithms differ. NOTE 3: Idemix caters for multi-show unlinkability, whilst U-Prove does not [i.226]. The Idemix ABC system has been formalized by Camenisch et al. in the paper "A Formal Model of Identity Mixer" [i.46] and the Idemix revocation mechanisms are discussed by Lapon et al. in the paper "Analysis of Revocation Strategies for Anonymous Idemix Credentials" [i.196].
6.6.2 U-Prove
The U-Prove scheme is based on Attribute Based Credentials (ABC), which in turn rely upon Stefan Brands' cryptographic research on selective disclosure and blinded signature schemes in the book "Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy" from 2000 [i.33]. Brands founded a company to implement the U-Prove ABC scheme, and this company was later acquired by Microsoft®. In 2013, Microsoft® Research released the Identity Metasystem with support for U-Prove ABC to cater for anonymous credentials [i.201]. The U-Prove ABC system was also selected by the EC-funded project Attribute based Credentials for Trust (ABC4Trust) [i.137]. In summary, the U-Prove scheme contains two protocols: issuing the credential to a user and presenting it when accessing a relying party. The U-Prove scheme is illustrated in Figure 14. Figure 14: Overview of the U-Prove ABC scheme The U-Prove issuing protocol is performed between the issuer and the user. The objective of this protocol is for the user to receive a credential, such that it can later present a selected set of attributes to access a relying party. The issuer basically applies a blind signature to the credential with attributes. In other words, the issuer verifies the validity of the attributes and applies a signature without seeing the resulting signature. Since the issuer does not store the result of the issuing protocol, the user cannot be tracked when using the credential, i.e. the processes of issuing and presenting are unlinkable. The U-Prove presentation phase is based on a selective disclosure protocol between the user and the relying party. Based on the relying party's presentation policy, the user selects those attributes that it is willing to present from the issued credential. All the other attributes can be proved by the user to be unchanged in the credential. By the end of the interaction the relying party receives a presentation token with all the revealed attributes and the intact issuer's signature on the whole set of attribute values. NOTE 1: U-Prove is similar to Idemix (see clause 6.6.1) in the sense that both protocols are based on privacy-preserving ABC technology, although the iterations in the issuance phase and the underlying cryptographic algorithms differ. The U-Prove scheme is based on the DLP and the credentials are issued as DLREP-based certificates as well as RSAREP-based certificates. NOTE 2: Since U-Prove is based on algorithms using the DLP, the scheme cannot be considered as quantum-safe.
6.6.3 ISO/IEC 18370 (blind digital signatures)
The ISO/IEC 18370 series [i.183] standardizes blind digital signature protocols. ISO/IEC 18370-1:2016 provides an overview of blind digital signature solutions, whereas ISO/IEC 18370-2:2016 specifies discrete logarithm based mechanisms. More specifically, clause 8 of ISO/IEC 18370-2:2016 specifies a DLP-based blind signature protocol with selective disclosure capabilities. In fact, mechanism 4 described in clause 8 of ISO/IEC 18370-2:2016 is a standardization of the Microsoft® U-Prove anonymous credential system (see clause 6.6.2 of the present document). Since ISO/IEC 18370 [i.183] is an international standard, it has the potential to be referenced by EU regulations. This raises the question of whether ISO/IEC 18370 [i.183] could serve as a standardized selective disclosure protocol for the EUDI Wallet. There are however two critical issues associated with ISO/IEC 18370 [i.183]. The first critical issue with mechanism 4 described in clause 8 of ISO/IEC 18370-2:2016 (i.e. U-Prove) is that it does not provide multi-show unlinkability. In other words, it is only possible to present a U-Prove credential unlinkably once; thereafter, additional presentations of the U-Prove credential are linkable. The second issue is that the U-Prove scheme is broken under certain conditions, as described in the article "On the (in)Security of ROS" [i.22]. Provided that the U-Prove issuance protocol is executed concurrently, it is possible to forge a U-Prove credential. However, U-Prove will remain secure if the issuance protocol is only executed sequentially, but this would be neither practical nor user-friendly. Since ISO/IEC 18370-2:2016 is based on algorithms using the DLP, the scheme cannot be considered as quantum-safe. Hence, the ISO/IEC 18370 [i.183] standard on blind signatures is not recommended to be considered as a selective disclosure protocol for the EUDI Wallet.
6.6.4 Keyed-Verification Anonymous Credentials (KVAC)
The anonymous credentials systems Idemix (clause 6.6.1) and U-Prove (clause 6.6.2) are based on public key primitives. A different approach, which is based on algebraic Message Authentication Codes (MACs) in prime-order groups, was proposed by Chase et al. in the paper "Algebraic MACs and keyed-verification anonymous credentials" [i.59]. The paper describes two anonymous credentials systems called "Keyed-Verification Anonymous Credentials (KVAC)", as they require the verifier to know the issuer secret key. The KVAC system is based on two algebraic MACs in prime-order groups, along with protocols for issuing credentials, asserting possession of a credential, and proving statements about hidden attributes (e.g. the age of the user). The performance of the KVAC schemes is comparable to U-Prove and faster than Idemix. However, the presentation proof, for n unrevealed attributes, is of complexity O(n) in the number of group elements. In order to address the complexity issue, a new KVAC system has been designed that provides multi-show unlinkability of credentials and is of complexity O(1) in the number of group elements. This enhanced KVAC scheme was described by Barki et al. in the paper "Improved Algebraic MACs and Practical Keyed-Verification Anonymous Credentials" [i.15]. A new algebraic MAC_BBS+ scheme based on a pairing-free variant [i.50] of BBS [i.27] is also described in the paper. This KVAC system is suitable for resource constrained environments like SIM-cards, and MAC_BBS+ has been implemented as a prototype on standard SIM-cards. Only the verification process differs between the MAC_BBS+ and BBS+ versions but all other operations remain the same (such as credentials issuance and generation of verifiable presentations). The MAC_BBS+ signatures are therefore equivalent to BBS+ signatures for the KVAC system as a whole. Hence, the verification of a MAC_BBS+ verifiable presentation can be done more efficiently and without pairings, provided that the verifier and the issuer are the same entity and therefore share the issuance private key. This could for example be the case in e-voting or public transportation use cases, where the voting authority manages the virtual ballot box server, or the public transportation authority manages the turnstiles/validators. The BBS+ variant of the KVAC system, which can be seen as the public-key variant of MAC_BBS+, is described in section 4.4 of the paper "Improved Algebraic MACs and Practical Keyed-Verification Anonymous Credentials" [i.15]. The main drawback of KVAC systems is that they are tailored to specific settings in which the issuer also acts as a verifier, as in the case of e-government or public transportation. They are not suited to the more general setting, envisioned in eIDAS2, in which the issuer and the verifier are two distinct entities that do not necessarily share the issuance private key. To address this issue, a new KVAC system, called BBS# [i.78], has been designed to provide publicly verifiable and multi-show anonymous credentials from pairing-free elliptic curves. In addition, BBS# allows a credential to be bound to a hardware-protected device key without requiring any change in that hardware or in the algorithms it supports. It leverages MAC_BBS+ (as described in [i.15]), hence its name.
In contrast to conventional (pairing-free) KVACs, BBS# enables publicly verifiable showing of credentials, and this is achieved either by allowing the verifier to request the issuer to check the validity of a specific part of a VP or by allowing the holder to interact with the issuer (during a VP or ahead of time) to generate additional ZK proofs. Requests to or interactions with the issuer preserve the user's anonymity. In particular, interactions with the issuer are entirely oblivious [i.218]: the issuer does not need to authenticate or verify anything about the user with whom it is interacting and can neither link the interaction to another performed by the same user, nor learn anything about the user's credential attributes. BBS# has been implemented as a prototype on several smartphones using different secure execution environments. [i.78] reports that the time required to generate and verify a presentation is less than 70 ms in high-end mobile devices (using Android StrongBox) and that the size of a BBS# presentation proof is 416 + U × 32 bytes, where U denotes the number of hidden attributes. More information about BBS# is available in clause 4.4.3.
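EXAMPLE: Applying this formula, a BBS# presentation proof that hides U = 4 attributes amounts to 416 + 4 × 32 = 544 bytes.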
6.6.5 Fast IDentity Online with Anonymous Credentials (FIDO-AC)
The anonymous credential approaches presented above, with the exception of BBS#, are usually motivated by first considering anonymous credentials and then devising mechanisms to make them compatible with legacy credential formats and hardware, particularly for holder binding. Yeoh et al. have published the research paper "Fast IDentity Online with Anonymous Credentials (FIDO-AC)" [i.269] that is motivated by the converse direction, yet leading to similar constructions: They observe that the FIDO2 standard [i.110] that is used for passwordless authentication and strongly connected to legacy signature mechanisms supported by secure hardware (RSA and ECDSA) is not privacy-friendly, and so is its combination with revealing digital certificates bound to the corresponding cryptographic keys. As such, Yeoh et al. [i.269] enrich FIDO-based authentication with the presentation of attributes of a credential connected to the same cryptographic keypair in a privacy-oriented way that facilitates unlinkability and predicates. Thereby, from a functional perspective, it corresponds to a construction concerning compatibility with legacy formats: There is compatibility with legacy secure hardware, which commonly supports the FIDO2 protocol, and the anonymous credential is constructed from ICAO-compliant electronic passports, which differ from the mdoc standard (ISO/IEC 18013-5 [i.181]) yet have large-scale adoption with around a billion issued electronic passports. Creating anonymous credentials from electronic passports has more broadly received attention, e.g. in the zk-creds paper [i.231] and projects like zkPassport [i.270] - however, it should be noted that owing to the lack of active authentication (involving a PIN) in most electronic passports, this is not suitable for meeting a substantial level of assurance in remote identification owing to the lack of a second factor. As such, the work not only focuses on cryptographic constructions but also on how the FIDO protocol would need to be augmented to facilitate the selective presentation of attributes and predicates of associated digital certificates, akin to how OID4VP would need to be extended to facilitate ZKP-based presentations of (Q)EAA (see also clause 6.8.2). As in Paquin et al. [i.219], the Groth16 ZKP system is chosen to implement the anonymous credential capabilities, which leads to a proving time of around 3 seconds on a Google Pixel 6 (Frigo and Shelat [i.113] use the same device for benchmarking). The authors also claim that preprocessing can further reduce the time for creating the ZKP, yet they do not further specify how the relying party's random challenge is considered. The core contribution of the paper, therefore, seems to be rather the provably secure integration of presenting anonymous credentials from legacy formats into the FIDO protocol than designing a novel cryptographic algorithm or performance optimizations of circuit-based ZKPs. With the affordances of the FIDO2 protocol (that caters for strong authentication and phishing protection) being largely covered by the means of presenting (Q)EAA at least at a substantial level of assurance, the relevance of this work ([i.269]) for data minimization in presentations with the EUDI Wallets may, therefore, be limited.
6.7 ISO mobile driving license (ISO mDL)

6.7.1 Introduction to ISO/IEC 18013-5 (ISO mDL)
The ISO mobile driving license (ISO mDL) is specified in the ISO/IEC 18013-5 [i.181] standard, which on a high level can be divided into the device retrieval flow (see clause 6.7.2) and the server retrieval flows (see clause 6.7.3) for selective disclosure of the user's mdoc. ISO/IEC CD 18013-7 [i.182] is a draft specification that extends the ISO/IEC 18013-5 [i.181] standard with unattended flows (see clause 6.7.4), which are online protocols for selective disclosure of the user's mdoc to a web hosted mdoc reader.
6.7.2 ISO/IEC 18013-5 (device retrieval flow)
The device retrieval flow is described in ISO/IEC 18013-5 [i.181], clauses 6.3.2, 6.3.2.1 (as flow 1) and 6.3.2.4. The credential format is the mdoc, which contains the attributes about the user, in conjunction with the Mobile Security Object (MSO). The MSO is a signed object that contains a list of salted attribute hashes of the user's attributes. The MSO caters for selective disclosure based on the salted attribute hashes as described in clause 5.3.2. The selected attributes of the mdoc and the MSO are presented by the user's mdoc app to an mdoc reader by using BLE, NFC or WiFi. The mdoc reader verifies the MSO and the selectively disclosed attributes (see clause D.2 for more information on the device retrieval flow). ISO/IEC 18013-5 [i.181] is considered mature, and several device retrieval solutions have been deployed in production, for example in a number of states in the US. The MSO and DeviceSignedItems can be signed with cryptographic algorithms that are currently approved by SOG-IS [i.237]. Since the MSO and DeviceSignedItems are signed with a COSE-formatted signature, this caters for MSOs to be signed in the future with QSC algorithms as discussed in the IETF report "JOSE and COSE Encoding for Post-Quantum Signatures" [i.149]. NOTE: Although DeviceSignedItems can be signed with candidate quantum-safe signatures, the issue of having a quantum-safe key agreement mechanism to secure the communication channel remains. The ephemeral session keys between the mdoc device and the reader are currently exchanged using the ECKA-DH key agreement, which is vulnerable to quantum computing attacks. Furthermore, MAC signatures are mentioned in ISO/IEC 18013-5 [i.181] as offering a better privacy guarantee, but the MAC secret is derived from an ECKA-DH key agreement, which is exposed to the quantum computing vulnerability. An extensive analysis of the session key exchange is beyond the scope of the present document, but this quantum computing vulnerability should be observed. The device retrieval flow has been selected as a PID protocol for the EUDI Wallet as specified in the ARF [i.71]. An extensive analysis of the device retrieval flow, and how it can be applied for eIDAS2 QTSPs and EUDI Wallet PID/(Q)EAA, is available in clause 7.2.3.
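The salted-hash check performed by the mdoc reader can be sketched as follows. The element names are taken from ISO/IEC 18013-5 [i.181], but the CBOR structure is abbreviated (tagging, COSE_Sign1 signing of the MSO and the DeviceResponse framing are omitted), so this is an illustrative outline only. The sketch assumes the cbor2 package.

```python
import hashlib
import os
import cbor2

# Disclosed attribute with its per-attribute salt, as carried in an IssuerSignedItem.
issuer_signed_item = {
    "digestID": 7,
    "random": os.urandom(16),              # per-attribute salt
    "elementIdentifier": "age_over_18",
    "elementValue": True,
}

# Issuer side: compute the digest over the CBOR-encoded item; the list of digests
# is placed in the MSO, which the issuer signs (COSE_Sign1, omitted here).
digest = hashlib.sha256(cbor2.dumps(issuer_signed_item)).digest()
mso_value_digests = {"org.iso.18013.5.1": {7: digest}}

# Reader side: recompute the digest from the disclosed item and compare it with
# the digest carried in the (signature-verified) MSO.
recomputed = hashlib.sha256(cbor2.dumps(issuer_signed_item)).digest()
assert recomputed == mso_value_digests["org.iso.18013.5.1"][7]
```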
6.7.3 ISO/IEC 18013-5 (server retrieval flows)
The server retrieval flows are described in ISO/IEC 18013-5 [i.181], clause 9.2. The server retrieval flow can be initialized as a hybrid device/server process (see annex D.2.2) or as a server process (see annex D.2.3). Once the server retrieval flow has been initialized, it continues with either the WebAPI flow or the OpenID Connect (OIDC) flow.
In the WebAPI flow the mdoc Reader submits a server retrieval WebAPI Request with a list of requested DataElements to the Issuing Authority. Upon the user's consent, the Issuing Authority will reply with the mdoc Response with the selected and disclosed DataElements (see clause D.2.4 for more information).
In the OIDC flow the mdoc Reader (OIDC client) submits a server retrieval OIDC Request with the requested data elements (JWT claims) to the Issuing Authority, which operates an OIDC Authorization Server. This activates the OIDC authorization code flow [i.212]. Based on the user's consent, the Issuing Authority (OIDC Authorization Server) will reply to the mdoc Reader (OIDC client) with the OIDC Token with the selected and disclosed JWT claims about the user (see clause D.2.5 for more information).
ISO/IEC 18013-5 [i.181] and the OIDC standards are considered mature, and several server retrieval solutions have been deployed in production, for example in a number of states in the US. The WebAPI and OIDC tokens are JWTs that can be signed with cryptographic algorithms that are currently approved by SOG-IS [i.237]. Since the WebAPI and OIDC tokens are signed with a JOSE-formatted signature, this caters for those JWTs to be signed in the future with QSC algorithms as discussed in the IETF report "JOSE and COSE Encoding for Post-Quantum Signatures" [i.149].
An extensive analysis of the server retrieval flow, and how it can be applied for eIDAS2 QTSPs and EUDI Wallet PID/(Q)EAA, is available in clause D.2.4.
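As an illustration of the WebAPI exchange described above, the following sketch shows the shape of a server retrieval request and response as plain JSON-like dictionaries. The field names are illustrative only and do not reproduce the normative CDDL structures of ISO/IEC 18013-5 [i.181].

```python
# Hypothetical shape of a server retrieval WebAPI exchange (field names illustrative only).
web_api_request = {
    "docType": "org.iso.18013.5.1.mDL",
    "requestedDataElements": ["family_name", "age_over_18"],
    "token": "opaque-server-retrieval-token",  # obtained during flow initialization
}

def disclosed_subset(request: dict, user_data: dict, consented: set) -> dict:
    """Issuing Authority side: return only the requested elements the user consented to."""
    return {k: user_data[k] for k in request["requestedDataElements"]
            if k in consented and k in user_data}

user_record = {"family_name": "Doe", "age_over_18": True, "portrait": "<binary>"}
web_api_response = {
    "docType": web_api_request["docType"],
    "dataElements": disclosed_subset(web_api_request, user_record,
                                     consented={"family_name", "age_over_18"}),
}
print(web_api_response["dataElements"])  # {'family_name': 'Doe', 'age_over_18': True}
```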
6.7.4 ISO/IEC 18013-7 (unattended flow)
The ISO/IEC CD 18013-7 [i.182] draft standard extends ISO/IEC 18013-5 [i.181] with the unattended flow, i.e. the online flow whereby an mdoc app connects directly to an mdoc reader that is hosted as a web server application. ISO/IEC CD 18013-7 [i.182] is backward compatible with the protocols specified in ISO/IEC 18013-5 [i.181]. The ISO/IEC CD 18013-7 [i.182] unattended flow is based on the following protocols:
• Device Retrieval from an mdoc app to a web server application by using REST APIs over HTTPS POST; this flow is described in clause D.3.1.
• OpenID for Verifiable Presentations (OID4VP) [i.214] in conjunction with Self-issued OpenID Provider v2 (SIOP2) [i.216]; this flow is described in clause D.3.2.
Both protocols for the unattended flow transmit the selectively disclosed mdoc attributes in conjunction with the MSO from the mdoc app to the mdoc reader. The mdoc attributes and the MSO are verified according to the same principles as for the mdoc device retrieval flow (see clause 7.2.3). As described in clause 6.7.1, the MSO can be signed with SOG-IS approved cryptographic algorithms and allows for QSC algorithms for future use.
ISO/IEC CD 18013-7 [i.182] is still a draft, so there are no real deployments in production. NIST NCCoE will carry out interoperability tests [i.206] with an ISO/IEC CD 18013-7 [i.182] compatible reader during the course of 2023 and 2024.
The proximity unattended flow has been selected as a PID protocol for the EUDI Wallet as specified in the ARF [i.71]. An extensive analysis of the unattended flow, and how it can be applied for eIDAS2 QTSPs and EUDI Wallet PID/(Q)EAA, is available in clause D.3.
6.7.5 ISO/IEC 23220-4 (operational protocols)
ISO/IEC CD 23220-4 [i.187] is a draft specification describing operational (presentation) protocols for a digital wallet. The specification expands on ISO/IEC 18013-5 [i.181] with reader engagement, internet online connections to a reader, and bridges to additional standards for user authorization such as OID4VP [i.214] and credential formats such as W3C Verifiable Credentials [i.264]. ISO/IEC CD 23220-4 [i.187] presentation protocols are based on the following protocols: • Device Retrieval from a digital wallet to a web server application by using REST APIs over HTTPS POST. • OpenID for Verifiable Presentations (OID4VP) [i.214] in conjunction with Self-issued OpenID Provider v2 (SIOP2) [i.216]. More specifically, Annex B in ISO/IEC CD 18013-7 [i.182] draft specification refers to ISO/IEC CD 23220-4 [i.187] for the OID4VP/SIOP2 profile to be used for presentation of the mdoc in an ISO/IEC CD 18013-7 [i.182] unattended flow. As described in clause 6.7.1, the MSO can be signed with SOG-IS approved cryptographic algorithms and allows for QSC algorithms for future use. Furthermore, Annex B in ISO/IEC CD 23220-4 [i.187] WD9 describes how to present W3C Verifiable Credentials [i.264] in conjunction with IETF SD-JWT [i.155] for selective disclosure. The SD-JWT can be signed with SOG-IS approved cryptographic algorithms and allows for QSC algorithms for future use (see clause 7.3). In order to secure the HTTPS connection to an online reader (relying party), ISO/IEC CD 23220-4 [i.187] recommends the use of QWACs. ISO/IEC CD 23220-4 [i.187] is still a draft, so there are no real deployments in production. However, the ARF [i.71] refers to ISO/IEC CD 23220-4 [i.187] as an alternative attestation exchange REST API protocol. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 93
6.8 OpenID for Verifiable Credentials (OpenID4VC)
6.8.1 OpenID for Verifiable Credential Issuance (OpenID4VCI / OID4VCI)
OpenID for Verifiable Credential Issuance specifies an OAuth-protected API for the issuance of Verifiable Credentials of different formats. To enable secure digital credential issuance and provisioning across different platforms and providers, a standardized protocol is essential. The protocol provides support for different options and features to meet the requirements of different ecosystems and Issuers:
• Initiation of flows by the Wallet or the Issuer.
• User information and authorization via an Authorization Server or out-of-band mechanisms.
• Immediate or deferred issuance for cases where the Issuer cannot issue credentials immediately.
• Batch Issuance - a way to request and receive batches of credentials bound to different keys.
• Securely binding credentials to secret keys generated by the Wallet.
• Key Attestations that provide additional trust in the key storage (WSCD).
• Wallet Attestations that provide additional trust in the Wallet application and device.
• Display metadata that allows for a customized display of credentials within a Wallet.
Describing the whole protocol and all of its features in detail is beyond the scope of the present document. Instead, the present clause focuses on some key features around key derivation, batch issuance and general support for ZKPs.
OpenID4VCI has built-in support for batch issuance. The flow starts with discoverable issuer metadata by which an Issuer can signal whether batch issuance is supported and, if so, what maximum batch sizes can be requested in one interaction. Upon discovering the support for batch issuance and after successful authorization, a Wallet can choose how many credentials it wants to receive by sending the appropriate number of proof objects (either a proof of possession or a key attestation) for key-bound credentials. The Issuer checks those proofs for correctness and responds with an array of credentials bound to the keys that were provided. Batch issuance and the issuance of a single credential do not differ in the general flows, just in the number of key proofs and credentials.
OpenID4VCI has a defined extension point for proving possession of private keys in a credential request by the Wallet. There are currently three defined proof types:
• a jwt proof, a simple proof of possession by which the Wallet demonstrates possession of the private key belonging to the public key that a credential gets bound to; this type is used for mdoc and SD-JWT;
• a proof of possession for W3C based credentials using Data Integrity proofs;
• key attestations, where the key storage of a Wallet makes a trusted statement that specific keys were generated in hardware.
To better support key derivation mechanisms like ARKG, another variant could be added to allow a single proof of possession for a public master key and the issuance of a batch of credentials bound to derived keys. While this is currently not part of the OpenID4VCI specification, it is part of ongoing discussions and would be easy to add leveraging the defined extension points.
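The batch issuance flow described above can be sketched as follows. This is a simplified illustration, assuming JWT proof-of-possession objects and JSON field names modelled on recent OpenID4VCI drafts; the helper make_proof_jwt() and the credential configuration identifier are hypothetical placeholders, not normative protocol elements.

```python
import base64
import json

def make_proof_jwt(key_id: str, nonce: str) -> str:
    """Hypothetical stand-in for a signed JWT proof of possession.
    A real wallet would sign the issuer-provided nonce with the private key behind key_id."""
    payload = json.dumps({"kid": key_id, "nonce": nonce}).encode()
    return base64.urlsafe_b64encode(payload).decode()

def build_batch_credential_request(key_ids, issuer_nonce, credential_id):
    # One proof object per key that a credential in the batch should be bound to.
    return {
        "credential_configuration_id": credential_id,
        "proofs": {"jwt": [make_proof_jwt(k, issuer_nonce) for k in key_ids]},
    }

request = build_batch_credential_request(["key-1", "key-2", "key-3"],
                                          issuer_nonce="c_nonce-1234",
                                          credential_id="example-pid")
assert len(request["proofs"]["jwt"]) == 3  # the issuer answers with three bound credentials
```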
6.8.2 OpenID for Verifiable Presentations (OpenID4VP / OID4VP)
OpenID for Verifiable Presentations is a mechanism to request and deliver presentations of digital credentials of different credential formats. OID4VP is built on top of OAuth 2.0 and extends it by introducing a new return type called VP Token that serves as a container holding one or more presentations that a Wallet provides to a Relying Party. Similar to OID4VCI, OID4VP was built to be credential format agnostic and pre-defines credential format profiles for mdoc, SD-JWT VC, and W3C VCDM based credentials.
OID4VP supports two different flows: the same-device flow, where the wallet is invoked from another application (e.g. the browser) on the same device, and the cross-device flow, where the wallet is invoked from a different device. It also describes a profile for the Digital Credentials (DC) API, which allows browsers to natively invoke wallets and improves the security of cross-device flows by leveraging the underlying transport of the Client to Authenticator Protocol (CTAP). OID4VP provides for different options to authenticate a Relying Party by introducing a new client ID Prefix mechanism that allows different ecosystems to use different trust mechanisms.
OID4VP also introduces a transaction data mechanism whereby a credential presentation can be bound to a specific transaction authorization, allowing for authorization of use-cases like digital document signatures and payment transactions.
OID4VP introduces its own JSON based query language called Digital Credentials Query Language (DCQL) that:
• is credential format agnostic;
• allows for queries spanning multiple credentials;
• allows for different options to fulfil a request (A or B type queries);
• allows querying trusted authorities - matching issuers or trust frameworks.
DCQL was designed such that most parts of the query language do not depend on the credential format, but some parts, like type matching, are defined per credential format. This allows the overall query language to stay relatively simple but cater for the different requirements and payloads of the different credential formats. DCQL allows requests to be expressed at the individual attribute level and also supports optional filtering on expected values. A response within OID4VP allows for easy matching from the different presentations provided in a VP Token to the specific query parts in the request they belong to. This is especially important for queries that have some level of optionality.
While DCQL was not built explicitly for Zero-Knowledge Proofs, it can be used for simple queries for ZKPs and there are extension points for more complex constructions like predicate proofs or composite proofs that can be added once those requirements are fully understood.
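The following sketch gives a flavour of a DCQL query for individual attributes of an SD-JWT VC based credential. The structure is modelled on recent OID4VP drafts, but the field names, credential type and claim paths are illustrative rather than normative.

```python
# Illustrative DCQL query: ask for two claims of an SD-JWT VC credential.
dcql_query = {
    "credentials": [
        {
            "id": "pid_request",
            "format": "dc+sd-jwt",
            "meta": {"vct_values": ["urn:example:pid"]},   # placeholder credential type
            "claims": [
                {"path": ["family_name"]},
                {"path": ["age_equal_or_over", "18"]},      # pre-computed age_over flag
            ],
        }
    ]
}

def requested_claim_paths(query: dict) -> list:
    """Collect all claim paths requested across the query, format-agnostically."""
    return [claim["path"]
            for cred in query["credentials"]
            for claim in cred.get("claims", [])]

assert requested_claim_paths(dcql_query) == [["family_name"], ["age_equal_or_over", "18"]]
```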
6.8.3 OpenID4VC High Assurance Interoperability Profile (HAIP)
The OpenID4VC High Assurance Interoperability Profile (HAIP) defines a profile of the OID4VCI and OID4VP protocols and mdoc and SD-JWT VC credential formats. The aim of the profile is to define a subset of features and a set of mandatory requirements for those specifications to create interoperable implementations for use-cases where a high level of security and privacy is required. The profile does not define trust management, but mandates support for: • X.509 based verifier authentication (signed requests); • Response Encryption (for both OID4VCI, and OID4VP); • specific signature and encryption algorithms. The core goal of HAIP is to narrow down choices of the different protocols and credential formats to create a manageable subset of choices that will guarantee interoperable and secure implementations.
6.9 The Iden3 protocol
6.9.1 Introduction to the Iden3 protocol
Iden3 [i.139] has its origin in the development of Circom [i.68], which is a language to implement constraint systems for ZKPs, and SnarkJS, a library for generating and verifying zk-SNARKs based on the Groth16 proof system. Circom has been benchmarked in the report "Benchmarking ZK-Friendly Hash Functions and SNARK Proving Systems for EVM-compatible Blockchains" [i.127] and SnarkJS is the first general-purpose zk-SNARK prover capable of running on edge devices. As such, the work underlies the implementation of Crescent [i.219] and the proposal by Babel and Sedlmeir [i.14]. Circom has undergone major improvements since its first release and is one of the more popular frameworks for implementing ZKPs in the Web3 space, with an active development community that implemented complex libraries for, e.g. ECDSA signature verification [i.69] and regular expression verification [i.272].
6.9.2 Cryptography behind the Iden3 protocol
The Iden3 protocol uses zk-SNARKs to conduct efficient verification for regular and blockchain (smart contract) applications, completely removing the need for the verifier to communicate with the issuer when verifying zero-knowledge proofs of predicates and selective disclosure, as well as when verifying credential non-revocation.
Iden3 utilizes a combination of the Groth16 zk-SNARK proving scheme (on the BN254 curve), Sparse Merkle Trees, BabyJubJub EdDSA signatures, and the Poseidon hash function [i.122].
• Groth16 [i.124] is a widely used zk-SNARK, especially in the blockchain space, because of its performance to on-chain verification cost ratio. The Iden3 protocol is designed to be pluggable such that it can utilize different proof systems. Plonk is another zk-SNARK system, which the Iden3 protocol is capable of utilizing to generate ZK proofs.
• Iden3 utilizes Sparse Merkle Trees (SMTs), a variation of Merkle Trees that allows proving not only set inclusion but also non-inclusion, as part of its cryptographic infrastructure to facilitate data integrity and selective disclosure. By using SMTs, Iden3 can verify data elements in large sets of data by hashing them into a single root hash. The Iden3SparseMerkleTreeProof [i.140] proves that the specific issuer has issued this verifiable credential by a Merkle tree proof that this claim is included in the issuer's Claims Merkle tree, and is therefore in the issuer's state. This state is published by the issuer to a trusted ledger. Since this algorithm does not use any kind of signature, it is stronger against potential quantum attacks than other, signature-based algorithms.
• BabyJubJub [i.268] is an elliptic curve, which can be used inside any zk-SNARK circuit, utilizing a BN254 pairing friendly elliptic curve. To verify a zk-SNARK proof, it is necessary to use an elliptic curve. The basic curve is alt_bn128 (also referred to as BN254), which has prime order r. But while it is possible to generate and validate proofs of F_r-arithmetic circuits with BN254, it is not possible to use BN254 to implement elliptic-curve cryptography within these circuits. Baby Jubjub is an elliptic curve defined over the finite field F_r which can be used inside any zk-SNARK circuit, allowing for the implementation of cryptographic primitives that make use of elliptic curves, such as the Pedersen Hash, the Poseidon Hash or the Edwards Digital Signature Algorithm (EdDSA). The Baby Jubjub curve satisfies the SafeCurves criteria. Baby Jubjub is a twisted Edwards curve birationally equivalent to a Montgomery curve [i.52]. The algorithm chosen for generating Baby Jubjub is based on the criteria defined in IETF RFC 7748 [i.166].
In the context of ZKPs, SMTs enable Iden3 to allow users to generate predicate proofs and selectively disclose specific VC attributes while maintaining the privacy of other elements. This data structure is also used to implement one of the ways to issue credentials, privacy-preserving credential revocation methods, and hiding a user's real identity behind identity "Profiles" (pseudonymous identifiers with strong cryptographic identity holder binding).
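To make the role of the SMT concrete, the following sketch verifies Merkle inclusion and non-inclusion proofs against a single root hash. It is a toy illustration only: SHA-256 stands in for the Poseidon hash, the tree depth is tiny, and the code does not reproduce Iden3's actual Iden3SparseMerkleTreeProof structure.

```python
import hashlib

DEPTH = 8  # toy tree with 2**8 leaves; real SMTs are much deeper

def h(*parts: bytes) -> bytes:
    # SHA-256 stands in for the Poseidon hash used by Iden3.
    return hashlib.sha256(b"".join(parts)).digest()

EMPTY = b"\x00" * 32  # placeholder hash for an absent leaf

def build_tree(leaves: dict) -> list:
    """Return all levels of the tree, level 0 = leaves, last level = [root]."""
    level = [leaves.get(i, EMPTY) for i in range(2 ** DEPTH)]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels: list, index: int) -> list:
    """Sibling hashes from leaf to root (a Merkle authentication path)."""
    path, idx = [], index
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root: bytes, index: int, leaf: bytes, path: list) -> bool:
    node, idx = leaf, index
    for sibling in path:
        node = h(node, sibling) if idx % 2 == 0 else h(sibling, node)
        idx //= 2
    return node == root

claims = {5: h(b"claim: age_over_18=true")}
levels = build_tree(claims)
root = levels[-1][0]

# Inclusion proof for the issued claim at index 5.
assert verify(root, 5, claims[5], prove(levels, 5))
# Non-inclusion proof for index 9: the authentication path shows the EMPTY leaf.
assert verify(root, 9, EMPTY, prove(levels, 9))
```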
6.9.3 Implementation aspects of the Iden3 protocol
The implementation of PrivadoID / Billions Network leverages the tooling that was originally developed for a blockchain-based rollup, which also includes optimizations of the Rapidsnark assembly prover that has been extended from servers to mobile phone instruction sets to improve client-side proof generation performance. While the Iden3 implementation is based on a circuit-based ZKP, the current implementation is not focused on compatibility with legacy formats. The main motivation for doing so follows the line of reasoning of the zk-creds paper [i.231]: With ZKP-friendly hashes like Poseidon being fast to prove in a circuit-based ZKP compared to digital signatures, verification can leverage a blockchain-based list of hashes or a cryptographic accumulator (e.g. a Merkle tree) that cryptographically anchors the certificate. This list can then also be used as a revocation registry; yet, the tradeoff is that offline verification without revocation checks is not possible anymore in this setting.
Alternatively, the Iden3 implementation offers a signature-based verification based on EdDSA - essentially, a Schnorr signature on the ZKP-friendly BabyJubJub curve [i.268]. EdDSA is also used by Babel and Sedlmeir [i.14] to obtain a faster proving speed. While binding to legacy hardware via RSA or ECDSA signatures could be implemented by leveraging existing tooling for Circom or corresponding improved variants in Crescent [i.219] or the work "Anonymous credentials from ECDSA" by Frigo and Shelat [i.113], the main application of the Iden3 protocol thus far has been the on-chain verification of credentials in blockchain projects, which typically tend to require less compatibility with legacy systems and at the same time often prioritize proof size over proof generation effort, such that Groth16 [i.124] is a suitable choice. The credential format used by Iden3 is JSON Web Zero-knowledge (JWZ) [i.141], which is described in clause 5.5.3. Yet, it should be noted that the Iden3 implementation also features an implementation of "anonymous Aadhaar" that showcases compatibility with legacy certificate formats involving RSA signatures (imported from the zk-email project).
Notably, while the Iden3 stack focuses on developing several smart contracts for identity verification, the use in bilateral settings is straightforward and the corresponding verifier components are required artifacts in the development of the corresponding smart contracts for blockchain-based verification. The circuits provide different means of credential presentation, including the selective disclosure of single attributes and corresponding range proofs. Moreover, the implementation is compatible with the emerging verifiable credentials standard, although it should be mentioned that the cryptographic representation of the certificate is different to avoid the need for costly parsing. Besides the cryptographic implementation, a valuable feature offered by the Iden3 stack is the development of a query language that simplifies the formulation of checks expected by the relying party. The corresponding modular design of the circuits hence obviates the need for storing a high number of proving keys (generated during the trusted setup) in the EUDI Wallet.
7 Implications of selective disclosure on standards for (Q)EAA/PID
7.1 General implications
The purpose of clause 7 is to analyse the implications of selective disclosure and unlinkability on ETSI standards for (Q)EAAs and PIDs. More specifically, the (Q)EAA/PID credentials discussed in the following clauses 7.2 and 7.3 are scoped to ISO/IEC 18013-5 [i.181] mdoc and SD-JWT, because these formats are explicitly specified as selective disclosure formats for PIDs in the ARF [i.71]. The main reason why mdoc and SD-JWT were selected in the ARF [i.71] as (Q)EAA/PID credentials is that they can be signed with cryptographic algorithms that are currently approved by SOG-IS [i.237], and that the credentials also allow for being signed with Quantum-Safe Cryptography (QSC) algorithms for future use. More technical details on how the issuer may apply such signatures on mdoc and SD-JWT are discussed in clauses 7.2.1 and 7.3.1 respectively. Furthermore, clause 7.4 analyses the possibilities of using BBS+ credentials as (Q)EAA/PID. The reason for analysing BBS+ is due to the emerging ISO standardization of BBS+, which may be used with W3C VCDM in conjunction with W3C VCDI. Since BBS+ with blinded signatures is fully unlinkable, it would be a viable alternative from a privacy preserving perspective. This in turn may cater for BBS+ to be referenced in a future version of the ARF and/or the ETSI TS 119 472-1 [i.97] standard on (Q)EAAs profiles. Also, clause 7.5 analyses solutions that utilize programmable ZKPs such as zk-SNARKs in conjunction with existing digital infrastructures. The reason for analysing such solutions is that they can provide fully unlinkable presentations that provide selectively disclosed attributes and revocation information, based on existing eIDAS X.509 QCs and the forthcoming eIDAS2 (Q)EAAs/PIDs. This in turn may cater for zk-SNARK based solutions to be referenced in a future version of the ARF and/or the ETSI TS 119 462 [i.95] standard on EUDI Wallet interfaces. The analysis in clause 7 is primarily focused on selective disclosure and unlinkability since those characteristics are defined in eIDAS2 [i.103] and the ARF outline [i.70]. Predicates are described on a high level, with proposals on how to implement them for the selected PID credentials mdoc and SD-JWT. The selected (Q)EAA/PID credentials are analysed with respect to the issuance by a QTSP/PIDP, how the credentials are stored in the EUDI Wallet, and how selected attributes are presented to a relying party. Firstly, it is analysed how the QTSP or PID provider may issue (Q)EAAs/PIDs with capabilities for selective disclosure. This analysis also describes the PKI trust models for the issuance process and whether EU Trusted Lists (EU TLs) can be applied. Furthermore, it is described how the (Q)EAAs/PIDs should be issued to cater for unlinkability. The recommended policies and practices for such QTSP/PIDP issuance processes are discussed for mdoc in clause 7.2 and SD-JWT in clause 7.3. Secondly, it is analysed how the (Q)EAAs/PIDs with capabilities for selective disclosure and unlinkability are stored in the EUDI Wallet. This analysis also describes the associated cryptographic keys used for proving the user's ownership of the (Q)EAAs/PIDs. The implications for storing the (Q)EAAs/PIDs with selective disclosure in an EUDI Wallet are discussed for mdoc in clause 7.2 and SD-JWT in clause 7.3. Thirdly, it is analysed how the selected attributes can be presented to a relying party, yet sustaining unlinkability. 
The recommended policies and practices for presenting the (Q)EAAs/PIDs with an EUDI Wallet are discussed for mdoc in clause 7.2 and SD-JWT in clause 7.3.
7.2 Implications for mdoc with selective disclosure
7.2.1 QTSP/PIDP issuing mdoc
7.2.1.1 General
The mdoc, as specified in ISO/IEC 18013-5 [i.181], is composed of the user's data elements, the authentication key, and the Mobile Security Object (MSO) with a signed list of salted hash values of these elements. The MSO is a CBOR-encoded [i.170] object, which is signed by the issuer with a COSE-formatted signature [i.167]. ISO/IEC 18013-5 [i.181] describes the Issuing Authority Certification Authority (IACA), which is the root CA used for issuing subordinated certificates, which in turn are used for signing the user's MSOs, signing revocation data (OCSP responses and CRLs), and securing online services (JWS and TLS).
Clauses 7.2.1.2 to 7.2.1.6 compare and map the requirements on ISO mDL compliant IACAs into considerations for eIDAS2 compliant QTSPs/PIDPs when issuing ISO mDLs with capabilities for selective disclosure and (predetermined) predicates. Clauses 7.2.1.2 to 7.2.1.6 also provide a summary of the ISO mDL and its Issuing Authorities, but it is recommended to study ISO/IEC 18013-5 [i.181] first in order to gain an understanding of the ISO mDL ecosystem.
7.2.1.2 Certificate profiles
The IACA's trust anchor is a DER-encoded X.509 certificate that should be issued according to the certificate profile in ISO/IEC 18013-5 [i.181], Annex B.1. ISO/IEC 18013-5 [i.181], Annex B.1.1 declares that all X.509 certificates are DER-encoded and specifies the generic certificate requirements on certificate extensions and subjects. The IACA certificate profile also defines the cryptographic algorithms that are approved by ISO/IEC 18013-5 [i.181]. In the context of eIDAS2, the cryptographic algorithms used in the QTSP/PIDP CA certificates are required to comply with the SOG-IS list of EU approved cryptographic algorithms [i.237]. Hence, the QTSP/PIDP CA certificates used for issuing ISO mDLs are required to comply with the intersection of IACA's and SOG-IS' requirements on cryptographic algorithms. EXAMPLE 1: SOG-IS [i.237], section 4.3 "Discrete Logarithm in Elliptic Curves" lists the following approved ECC curves: BrainpoolP256r1, BrainpoolP384r1, and BrainpoolP512r1. EXAMPLE 2: ISO/IEC 18013-5 [i.181], Table B.3 "Document signer certificate" lists the following approved ECC curves: BrainpoolP256r1, BrainpoolP320r1, BrainpoolP384r1, BrainpoolP512r1, Curve P-256, Curve P-384, and Curve P-521. The IACA trust anchor is used for issuing the following subordinated certificates in an IACA PKI: • mDL MSO signer certificate (ISO/IEC 18013-5 [i.181], Annex B.1.2). • JWS signing certificate (ISO/IEC 18013-5 [i.181], Annex B.1.3.1). • TLS server certificate issuing authority (ISO/IEC 18013-5 [i.181], Annex B.1.6). • TLS client authentication certificate (ISO/IEC 18013-5 [i.181], Annex B.1.8). • OCSP signer certificate (ISO/IEC 18013-5 [i.181], Annex B.1.9). Furthermore, the ISO mDL IACA CRL profile is specified in Annex B.2 in ISO/IEC 18013-5 [i.181]. An eIDAS2 QTSP/PIDP that issues ISO mDLs should adhere to the IACA PKI and the certificate and CRL profiles described above. One more alternative could be for ETSI to assign a specific QC extension to be used for trust anchor certificates that are used by accredited QTSPs to issue ISO mDLs. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 98
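A QTSP/PIDP selecting signature curves for its IACA hierarchy effectively computes the intersection of the two lists given in the examples above. The following sketch illustrates this; the curve lists are copied from SOG-IS [i.237] and ISO/IEC 18013-5 [i.181], Table B.3, as quoted in Examples 1 and 2.

```python
# ECC curves approved by SOG-IS (Example 1) and by ISO/IEC 18013-5, Table B.3 (Example 2).
sog_is_curves = {"BrainpoolP256r1", "BrainpoolP384r1", "BrainpoolP512r1"}
iso_mdl_curves = {"BrainpoolP256r1", "BrainpoolP320r1", "BrainpoolP384r1",
                  "BrainpoolP512r1", "P-256", "P-384", "P-521"}

# Curves usable by an eIDAS2 QTSP/PIDP when signing MSOs for ISO mDLs:
usable = sorted(sog_is_curves & iso_mdl_curves)
print(usable)  # ['BrainpoolP256r1', 'BrainpoolP384r1', 'BrainpoolP512r1']
```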
7.2.1.3 Trusted Lists
According to article 22(1) of eIDAS [i.104], each EU Member State is required to publish a Trusted List (TL) with all QTSPs in that EU Member State. All information referred to in eIDAS article 22(3), including the location and signing certificates of the TLs, is compiled in the EU List Of Trusted Lists (LOTL). Furthermore, the Commission Implementing Directive (CID) 2015/1505 [i.100] mandates the use of ETSI TS 119 612 [i.94] for the implementation of the trusted lists. ETSI TS 119 612 [i.94] specifies the format and mechanisms for establishing, locating, accessing and authenticating trusted lists. The EU TLs and the EU LOTL are XML-encoded according to specific XML schemas and signed with XAdES signatures as specified in ETSI TS 119 612 [i.94].
ISO/IEC 18013-5 [i.181] has introduced a similar concept called the Verified Issuer Certificate Authority List (VICAL), which contains the trustworthy IACAs that issue certificates for creating and operating ISO mDLs. An ISO mDL VICAL can be formatted and signed either in CDDL [i.170] or CMS [i.157] format. The ISO mDL VICAL Providers publish the VICALs. ISO/IEC 18013-5 [i.181], Annex C specifies the policy and security requirements and technical and procedural controls for a VICAL Provider.
NOTE: ISO/IEC 18013-5 [i.181], Annex C refers to ETSI EN 319 411-1 [i.90] and the FPKIPA X.509 Certificate Policy For The U.S. Federal PKI Common Policy Framework [i.109] for the operations of an ISO mDL VICAL Provider.
Hence, there are synergies between the EU TLs and the ISO mDL VICALs, in the sense that both trusted lists contain trust anchors. The main differences are the encodings and signature formats (EU TL XML/XAdES versus ISO mDL VICAL CDDL/CMS). In order to bridge this gap, ETSI TS 119 612 [i.94] may specify a CDDL/CMS profile of the EU TL that is compatible with the ISO mDL VICAL, or ISO/IEC 18013-5 [i.181] may be extended to specify an XML profile of the VICAL that is compatible with the ETSI EU TLs. In such a scenario, an eIDAS2 accredited QTSP/PIDP could issue CA certificates that are included in an EU TL, which in turn could be trusted as a VICAL in the ISO mDL ecosystem.
In summary, transposing ISO/IEC 18013-5 [i.181], Annex C to an eIDAS2 context results in the following recommendations:
• The ISO mDL Issuing Authority corresponds to the eIDAS2 QTSP/PIDP.
• The IACA trust anchor should be issued as a trust anchor by the eIDAS2 QTSP/PIDP that issues the ISO mDL as (Q)EAA/PID.
• The eIDAS2 QTSP/PIDP should ensure that its IACA trust anchor is published in the EU TL, which is issued by the supervisory body in the applicable EU Member State.
• ETSI TS 119 612 [i.94] may specify an additional CDDL/CMS profile of the EU TL that is compatible with the ISO mDL VICAL, or ISO/IEC 18013-5 [i.181] may be extended to specify an XML profile of the VICAL that is compatible with the ETSI EU TLs.
• The EU TLs may include a specific extension for the QTSPs that are authorized to issue QEAAs that are also compliant with ISO mDL; the EU TL extension can reference the ISO mDL VICAL where the QTSP is also listed.
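Conceptually, a relying party that accepts ISO mDLs issued by eIDAS2 QTSPs/PIDPs would resolve the IACA trust anchor against either list. The sketch below is purely illustrative and assumes both lists have already been parsed into simple collections of trust anchors (hypothetical helper types, not the XML/XAdES or CDDL/CMS encodings).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustAnchor:
    subject: str
    public_key_fingerprint: str   # hypothetical: fingerprint of the CA public key

def is_trusted_iaca(iaca_fingerprint: str,
                    eu_tl_anchors: set,
                    vical_anchors: set) -> bool:
    """Accept an IACA if its trust anchor appears in the EU Trusted List or in a VICAL."""
    known = {a.public_key_fingerprint for a in eu_tl_anchors | vical_anchors}
    return iaca_fingerprint in known

# Example: a QTSP's IACA published both as an EU TL entry and as a VICAL entry.
anchor = TrustAnchor("CN=Example QTSP IACA", "ab:cd:ef")
assert is_trusted_iaca("ab:cd:ef", {anchor}, set())
assert is_trusted_iaca("ab:cd:ef", set(), {anchor})
```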
7.2.1.4 Issuance of mdocs
An mdoc, which has been issued to the user's EUDI Wallet on a device, is essentially composed of the user's data elements and the MSO, which are associated with the mdoc authentication key (see clause 7.2.2). The data elements inside an mdoc consist of an unsigned list of the user's elements belonging to the nameSpace "org.iso.18013.5.1", as defined in ISO/IEC 18013-5 [i.181] for mDL. The MSO (mobile security object) is defined in ISO/IEC 18013-5 [i.181], section 9.1.2.4 as a signed object, which contains the mDL authentication public key and a list of salted attribute hashes of the user's elements. The MSO is signed with a COSE-formatted signature, by the IACA's MSO signer certificate. NOTE 1: In the context of eIDAS2, a QTSP/PIDP will issue an MSO signer certificate with cryptographic algorithms that are approved by both SOG-IS [i.237] and ISO/IEC 18013-5 [i.181]. NOTE 2: Since the MSO's signature is COSE-formatted, QSC algorithms can also be considered for the future according to the IETF IESG report [i.149]. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 99 According to section E.8.4 of ISO/IEC 18013-5 [i.181] and sections E.8.4 and E.5 of ISO/IEC CD 23220-3 [i.186] it is recommended that the mdoc authentication keys and related MSOs are updated frequently to achieve unlinkability when presenting the ISO mDL elements multiple times. Hence, the QTSP/PIDP should establish processes for issuing multiple MSOs to the user's EUDI Wallet, typically in batches prior to the device retrieval use of the MSOs. The EUDI Wallet may also signal to the QTSP/PIDP when it is necessary to refresh the MSOs. When issuing a new MSO, the random salts in IssuerSignedItems for the hash calculations should be unique such that the random salted hash values differ for each MSO, even if the user's data elements remain the same. EXAMPLE 1: Assume that the user's GivenName in the mdoc is "Smith". If the GivenName is combined with random salt S1 and hashed, the resulting hash value becomes H1 in the first MSO. If the same GivenName name is combined with another random salt S2 and hashed, the resulting hash value becomes H2 in the second MSO. mdoc does not support predicates in the sense that Zero-Knowledge Proofs or range proofs can be dynamically derived based on the elements in the mdoc. However, ISO/IEC 18013-5 [i.181], section 7.2.5 specifies the possibility to insert predetermined Boolean elements as "age_over_NN" in the ISO mDL. EXAMPLE 2: The Boolean statement "age_over_18" could be an element in the mdoc. NOTE 3: It is possible to include signed computational inputs and parameters to enable dynamic predicates (see clause B.1). In order to achieve (predetermined) predicates, the issuing QTSP/PIDP should establish processes to identify the relevant Boolean statements and insert them as elements in the mdoc.
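Example 1 above can be turned into a short sketch showing why fresh salts make two MSOs of the same batch unlinkable at the level of their digests. As before, SHA-256 and a JSON stand-in for the CBOR encoding are illustrative assumptions rather than the normative format.

```python
import hashlib
import json
import os

def salted_digest(identifier: str, value, salt: bytes) -> str:
    item = {"random": salt.hex(), "elementIdentifier": identifier, "elementValue": value}
    return hashlib.sha256(json.dumps(item, sort_keys=True).encode()).hexdigest()

given_name = "Smith"

# First MSO in the batch: salt S1 gives hash value H1.
h1 = salted_digest("given_name", given_name, os.urandom(16))
# Second MSO in the batch: salt S2 gives hash value H2, even though the value is unchanged.
h2 = salted_digest("given_name", given_name, os.urandom(16))

assert h1 != h2  # the signed digests cannot be correlated across the two MSOs
```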
7.2.1.5 Comparison with ETSI certificate profiles for Open Banking (PSD2)
ETSI TC ESI has specified certificate profiles and TSP policy requirements for Open Banking in the sector specific ETSI TS 119 495 [i.93]. The scope of ETSI TS 119 495 [i.93] is: • Specifies requirements for qualified certificates for electronic seals and website authentication, to be used by payment service providers in order to meet needs of Open Banking including the EU PSD2 [i.101] Regulatory Technical Standards (RTS) [i.98]. • Specifies additional TSP policy requirements for the management (including verification and revocation) of additional certificate attributes as required by the above profiles. In summary, a QTSP can issue PSD2 compliant certificates (QWACs or QCert for eSeal), using the certificate profile specified in ETSI TS 119 495 [i.93] as follows. The PSD2 specific attributes are checked by the (Q)TSP as part of the identity proofing, as specified in the ETSI TS 119 495 [i.93], REG-6.2.2-1, which states: "The TSP shall verify the Open Banking Attributes (see clauses 5.1 and 5.2) provided by the subject using authentic information from the Competent Authority (e.g. a national public register, EBA PSD2 Register, EBA Credit Institution Register, authenticated letter)." The European Banking Association (EBA) maintains a register of payment institutions [i.86], which can be used for that purpose. As a result, a QCStatement extension with Open Banking attributes is included in the PSD2 certificate, which proves its compliance with the PSD2 RTS. A relying party intending to validate a PSD2 certificate usually performs a two step validation approach: 1) The relying party validates the qualified status of the certificate using the EU TLs. 2) The relying party confirms the correctness of the PSD2 attributes included in the certificate QCStatement using either the national public registers, or the EBA register. The relying parties need to have out-of-band knowledge of where to retrieve the EBA register. The ETSI TS 119 495 [i.93] requirements for (Q)TSPs issuing PSD2 certificates may partially be re-used also for the issuance of mdocs, but with the following differences: • The format will be (Q)EAA for mdoc instead of X.509 certificates. • The relying party will confirm that the QTSP having issued the (Q)EAA is authorized to issue this specific type of (Q)EAA by looking into a domain-specific list, i.e. the ISO mDL VICAL. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 100 • To facilitate the validation of (Q)EAAs being used ISO mDLs, EU TLs could be used to point towards the domain-specific VICAL list where a QTSP is listed as being authorized for a specific scope. Alternatively, an URI for accessing this domain-specific VICAL list could be included in the mdoc (Q)EAA itself, although this may be too static as this URI may change over time.
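The two-step validation approach listed above can be summarized in a short sketch. The helper structures are hypothetical placeholders for EU TL processing and register lookups; no real API or register format is implied.

```python
def validate_psd2_certificate(cert: dict,
                              eu_tl_qualified_issuers: set,
                              eba_register: dict) -> bool:
    """Two-step validation of a PSD2 certificate by a relying party (illustrative only)."""
    # Step 1: validate the qualified status of the certificate using the EU Trusted Lists.
    if cert["issuer"] not in eu_tl_qualified_issuers:
        return False
    # Step 2: confirm the Open Banking attributes from the QCStatement against the EBA
    # register (or the applicable national public register).
    registered_roles = eba_register.get(cert["psd2_authorization_number"], set())
    return set(cert["psd2_roles"]) <= registered_roles

cert = {"issuer": "Example QTSP", "psd2_authorization_number": "PSD-0000-1234",
        "psd2_roles": {"PSP_PI"}}
assert validate_psd2_certificate(cert, {"Example QTSP"},
                                 {"PSD-0000-1234": {"PSP_PI", "PSP_AI"}})
```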
7.2.1.6 Mapping of mdoc and eIDAS2 terms
As discussed in clauses 7.2.1.1 to 7.2.1.5, there are several equivalences between the terms in ISO/IEC 18013-5 [i.181] and the terms in eIDAS2 [i.103] and the ARF [i.71]. Table 1 provides a mapping of eIDAS2 and ARF terms to the syntax used in ISO/IEC 18013-5 [i.181].
Table 1: Mapping of eIDAS2/ARF and ISO/IEC 18013-5 [i.181] terms (terms in eIDAS2 and the ARF - terms in ISO/IEC 18013-5 [i.181] (mDL)):
• End users of EUDI Wallets - mdoc Holder
• EUDI Wallet issuers - Technology Providers
• Person Identification Data Providers - Issuing Authorities
• Providers of registries of trusted sources (e.g. EU TL) - Verified Issuer Certificate Authority List (VICAL) Providers
• Qualified and non-qualified electronic attestation of attributes (qEAA) providers - Issuing Authorities
• QTSPs issuing qualified and non-qualified certificates for electronic signatures/seals - Issuing Authority Certification Authority (IACA)
• Providers of other trust services - Not defined
• Authentic sources - Governmental authoritative source
• Relying parties - mdoc Reader, operated by an mdoc verifier
• Conformity Assessment Bodies (CAB) - Auditing Bodies following ISO/IEC 27001 [i.189] and ISO/IEC 27002 [i.190]
• Supervisory bodies - Auditing Bodies following ISO/IEC 27001 [i.189] and ISO/IEC 27002 [i.190]
• Device manufacturers and related subsystems providers - Technology Providers
• Catalogue of attributes and schemes for the attestations of attribute providers - mdoc namespace
7.2.2 EUDI Wallet mdoc authentication key
The mdoc authentication key is used to prevent cloning of the mdoc and to mitigate man in the middle attacks. The mdoc authentication key pair consists of a public and a private key denoted as (SDeviceKey.Priv, SDeviceKey.Pub). The mdoc authentication public key is stored as the DeviceKey element in the MSO, and the corresponding mdoc authentication private key is used for signing the response data contained in the DeviceSignedItems structure (see ISO/IEC 18013-5 [i.181], sections 9.1.3, 9.1.2.4 and 9.1.3.3 for more information). Hence, the mdoc authentication key is used by the EUDI Wallet for authentication of selectively disclosed data elements that are presented to a relying party (see clause 7.2.3). More information on how to store the data elements, MSO, and the mdoc authentication key is available in clause 7.6. See also clause 4.3.4.2 on the possibility to use Hierarchical Deterministic Key derivation functions where the MSO issuer can issue a batch of MSOs, each with a unique and unlinkable DeviceKey element derived from a single DeviceKey element.
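The role of the mdoc authentication key can be sketched as follows, using the Python cryptography package. The sketch only shows the key relationship (the public DeviceKey embedded in the MSO and the private key signing the device response); it does not reproduce the COSE Sign1 structure or the session-transcript binding defined by ISO/IEC 18013-5 [i.181].

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Wallet side: generate the mdoc authentication key pair (SDeviceKey.Priv, SDeviceKey.Pub).
sdevice_key_priv = ec.generate_private_key(ec.SECP256R1())
sdevice_key_pub = sdevice_key_priv.public_key()

# Issuer side: the public key is embedded as the DeviceKey element of the MSO,
# which the issuer then signs (issuer signature omitted in this sketch).
mso = {
    "deviceKey": sdevice_key_pub.public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo),
}

# Presentation: the wallet signs the device response data with the private key.
device_signed_items = b"selectively disclosed response data"
device_signature = sdevice_key_priv.sign(device_signed_items, ec.ECDSA(hashes.SHA256()))

# Reader side: verify the device signature with the DeviceKey taken from the signed MSO.
reader_key = serialization.load_pem_public_key(mso["deviceKey"])
reader_key.verify(device_signature, device_signed_items, ec.ECDSA(hashes.SHA256()))
```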
7.2.3 EUDI Wallet used with ISO mDL flows
How the EUDI Wallet can be used with the different types of ISO mDL flows is described in Annex D.
7.3 Implications for SD-JWT selective disclosure
7.3.1 Analysis of using SD-JWT as (Q)EAA format applied to eIDAS2
An analysis of the IETF SD-JWT formats applied to an eIDAS2 context results in the following observations and recommendations: • The present document recommends using SD-JWT VC as a standalone attestation format where selective disclosure is required. When verifier unlinkability is required, it is possible to rely on a batch issuance approach where each SD-JWT VC contains unique salts. Each attestation in a batch should also contain a unique public key that the user needs for the holder binding JWT. Clause 4.3.4.2 describes the possibility to use Hierarchical Deterministic Key derivation functions where the SD-JWT VC issuer can issue a batch of SD-JWT VCs, each with a unique and unlinkable public key value derived from a single user controlled public key. • Another option to achieve unlinkability afforded by HAIP is for the user to request specific claims they need to present to a verifier and for the issuer to issue only these claims in the attestation; an approach that fits particularly well with the logic of short lived attestations. • The SD-JWT VC issuer corresponds to a QTSP and/or a PIDP. • The SD-JWT VC verifier corresponds to an eIDAS2 relying party (that will validate the SD-JWT as a (Q)EAA/PID). • The eIDAS2 relying party should use the eIDAS2 EU TL to retrieve the QTSP/PIDP trust anchor and verify with the corresponding x5c header parameter of the SD-JWT VC. • The eIDAS2 relying party should validate the attestation (submitted by the EUDI Wallet) according to the principles described in annex E; the issuer's signature should be validated by using the QTSP/PIDP trust anchor. • The SD-JWT VCs in the EUDI Wallet should all use unique salts as described in annex E to cater for verifier unlinkability when validated by the relying party. NOTE 1: Hence, the QTSP/PIDP would need to issue batchwise SD-JWT VCs in order to cater for multi-show verifier unlinkability. Batch issuance will require an operational procedure of issuing multiple SD-JWT VCs to each device on a regular basis, which may result in an additional operational cost for the QTSP/PIDP. Clause 4.3.4.2 describes an approach where the issuer can derive multiple unique user controlled public keys on the basis of a single user controlled public key. NOTE 2: SD-JWT does not satisfy the requirements of full unlinkability. • The SD-JWT VC is signed by the QTSP/PIDP with a JOSE formatted signature, which allows for SOG-IS approved cryptographic algorithms [i.237] and for QSC for future use [i.149]. • The SD-JWT VC may be signed with an ETSI JAdES signature if supported by the relying party. Thus, the JAdES signature format may contain additional information about revocation information, CA-chains and time-stamps. These observations and recommendations should be considered with respect to selective disclosure for the ETSI work items ETSI TS 119 462 [i.95], ETSI TS 119 471 [i.96] and ETSI TS 119 472-1 [i.97], where also a mapping algorithm for the PID could be proposed. ETSI ETSI TR 119 476-1 V1.3.1 (2025-08) 102
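The unique-salt recommendation above can be illustrated with the basic SD-JWT disclosure mechanism. The sketch follows the general construction of IETF SD-JWT [i.155] (salted disclosure arrays hashed into the signed payload), but is simplified and not a normative implementation.

```python
import base64
import hashlib
import json
import os

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim_name: str, claim_value):
    """Return (disclosure, digest): the disclosure is released to the verifier,
    the digest is what the issuer signs inside the SD-JWT payload."""
    salt = b64url(os.urandom(16))                      # fresh salt per claim per attestation
    disclosure = b64url(json.dumps([salt, claim_name, claim_value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())
    return disclosure, digest

# Two SD-JWT VCs issued in a batch carry different digests for the same claim value,
# so colluding verifiers cannot link them via the signed payloads alone.
_, digest_vc1 = make_disclosure("family_name", "Doe")
_, digest_vc2 = make_disclosure("family_name", "Doe")
assert digest_vc1 != digest_vc2
```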
7.4 Feasibility of BBS+ and BBS# applied to eIDAS2
7.4.1 General
The present clause provides an analysis of the feasibility of BBS+ and BBS# applied to eIDAS2. The BBS+ and BBS# schemes are of interest since they cater for issuer and verifier unlinkability, which could support privacy for a user's EUDI Wallet that shares selectively disclosed attributes. The BBS# scheme is of interest since it both leverages a SOG-IS sanctioned protocol (ECDSA or ECSDSA) for holder binding and caters for issuer and verifier unlinkability, which could support privacy for a user's EUDI Wallet that shares selectively disclosed attributes. The following aspects are in scope of the analysis: • The standardization status of BBS+ and BBS#, and if the schemes can be considered for the eIDAS2 regulation [i.103]. • Whether or not a standardized version of BBS+ and BBS# can be applied to the W3C Verifiable Credentials Data Model (VCDM) and ISO mobile driving license (ISO mDL). • Post-quantum aspects of BBS+ and BBS#. • Conclusions of how BBS+ and BBS# may be applied to QTSPs/PIDPs and EUDI Wallets operating under eIDAS2.
7.4.2 Standardization of BBS+ and BBS#
7.4.2.1 Standardization of BBS+
In order for BBS+ to be considered for the EUDI Wallet, it would have to be standardized by CEN, ETSI or ISO as declared in the EU regulation 1025/2012 [i.105]. As described in clause 4.4.6.1, a set of anonymous digital signature schemes is specified in the ISO/IEC 20008 series [i.184]. More specifically, ISO/IEC 20008-2 [i.184] mechanism 3 specifies the cryptographic primitives of a qSDH scheme, which corresponds to BBS04 with single messages [i.27]. BBS04 with single messages is however not practically sufficient for most attestation formats, including the W3C Verifiable Credentials Data Model and SD-JWT VC, which require BBS+ with multi messages.
BBS+, which supports multi messages, is however not yet fully standardized. IRTF CFRG is currently in the process of specifying BBS+ in the following IRTF CFRG BBS draft standards: The BBS Signature Scheme [i.177], Blind BBS Signatures [i.175], and BBS per Verifier Linkability [i.174]. In parallel, DIF is drafting a specification for a blind signature extension of BBS+ [i.80]. But even when the IETF and DIF standards are finalized, they will not have a status such that they can be referenced by the eIDAS2 regulation [i.103].
In order to bridge this gap, ISO/IEC has initiated the standardization work on ISO/IEC 24843 [i.185] on privacy-preserving attribute-based credentials. One objective of ISO/IEC 24843 [i.185] is to formally standardize the multi-message signature scheme version of ISO/IEC 20008-2 [i.184], i.e. BBS+. ISO/IEC is also working on the common draft ISO/IEC CD 27565 [i.191]. More specifically, Annex C of ISO/IEC CD 27565 [i.191] includes an example of selective disclosure by using BBS+, with a reference to the IRTF CFRG BBS draft specification.
Hence, the future ISO/IEC 24843 [i.185] standard, possibly in conjunction with ISO/IEC CD 27565 [i.191], has the potential to result in an ISO standardized version of BBS+ as well as other multi-message signature schemes. If these ISO standards on BBS+ materialize, they may be referenced by the eIDAS2 regulation [i.103] and its implementing acts. When such standards become available, the various attestation formats can also detail how BBS+ can be used as a proof mechanism.