Draft: KEP-2: Support for multiple keys
This KEP is meant as an alternative proposal to KEP-1, and at most one of them should be accepted.
KEP-2 has a larger scope than KEP-1, and partially reimagines the way KOO stores, manages, and serves keys. This means that implementing this KEP would be more effort than implementing KEP-1. However, it aims to set us up not just for the transition from v4 to v6, but also for the transition to post-quantum keys, and generally provides a more flexible base. (Nevertheless, policy decisions such as limitations on the number of keys may still be agreed upon, if desirable.)
Thanks for writing this up. For now I have just one remark about the terminology.
I really dislike the term "key block". First, Werner really likes it. Second, I think it is very imprecise. From the text I gather you mean a non-empty sequence of Transferable Public Keys (TPKs) in an armor container. But if you look at the GnuPG source, "key block" means TPK, armored or not, serialized or in core.

I have no good alternative to offer, but I'd avoid using the term for TPK (for which there is some precedent), and I'd argue even more strongly against using it for something other than the established meaning.
In my mind, a keyring is a local collection of all the keys you have, including your own private keys and the public keys of all your contacts. It might be a bit confusing if we use it to describe the keys of one person?
(I think it also still has the same problem Justus mentioned which is that it doesn't tell you anything about how it's encoded, i.e. armored or not, etc.)
We already have "secret keyrings" and "public keyrings" as distinct objects; `file` emits "PGP/PGP key public ring" when given a TPK file; there is plenty of precedent in the wild. If we want to clarify this particular usage of the term, we could qualify it, say "personal keyring" or "published keyring" or some such?

And I think encoding-independent terminology is a good thing. Wire formats are an implementation detail that should not normally be displayed in a UI, for example.
@valodim ok, then keeping entirely clear of overloaded terminology, what about "bulletin" in the sense of "terse announcement"?
Thanks, Daniel, for this proposal. I am especially interested in how exactly this is going to support the PQC transition. As far as I currently understand, the querying API is agnostic of the PQC aspect of keys. In the current PQC draft, v4 PQC keys (encryption) and v6 PQC keys (encryption and signature) are proposed. From the current proposal it seems that
- API v1 would return v4 PQC keys,
- API v2 would return v4 and v6 PQC keys
whenever such keys are uploaded under the respective e-mail address / fingerprint. Is that really sufficient for the transition? It is known that some clients choke on unknown key and signature versions. Against this background, I think the main question is: will clients that implement the v1 and v2 API correctly for traditional v4 and v6 keys, but don't implement PQC, be able to deal with "key blocks" that contain PQC keys? Will they, in case a certain key block contains one certificate with only traditional keys and one with PQC keys, be able to process at least the one with the traditional keys? Will the client output an error message for the certificate with unknown algorithm (and signature) versions?
Would it maybe make sense to extend at least the v2 protocol (potentially also v1) to (optionally?) tag the keys within a key block with certain predefined keywords? One such keyword could then be "pqc". The default keys which are expected to be supported by all clients might be tagged with a keyword "default" as well. This might prevent the case where one receives unexpected results due to errors emitted when processing some of the certificates contained in a key block, without knowing whether this was an acceptable error condition (e.g., a non-PQC client rejects a PQC certificate) or a non-acceptable one (e.g., a PQC-ready client cannot decode or verify a PQC certificate).
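As a concrete illustration, a tagged v2 response could look something like the sketch below. All field names and keywords here (including "default" and "pqc") are hypothetical assumptions for discussion, not part of any agreed protocol:

```python
# Hypothetical shape of a keyword-tagged key-block response in the v2 API.
# All field names, keywords, and fingerprints are illustrative placeholders.
tagged_key_block = {
    "keys": [
        {
            "fingerprint": "AB12CD34",  # placeholder fingerprint
            "tags": ["default"],        # expected to work in all clients
            "armored": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...",
        },
        {
            "fingerprint": "EF56AB78",  # placeholder fingerprint
            "tags": ["pqc"],            # requires PQC support
            "armored": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...",
        },
    ],
}
```

A non-PQC client could then skip the "pqc"-tagged entry up front instead of hitting a parse error.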
Hi Falko, thanks for the comments. To be perfectly honest, I would prefer to avoid the requirement for such signalling in the v2 API, if possible, by explicitly saying up front that clients should be able to deal with a key block which contains some keys which they can parse and some keys which they can't. There's some prose for how clients should deal with that here.
In my opinion, we shouldn't limit the types of incompatibilities that are "acceptable" up front. For example, let's say there comes some additional (non-backwards-compatible) feature after PQC, and someone generates two PQC keys, one with the feature and one without. Then "PQC-ready client cannot decode or verify [one of the] PQC certificate[s]" should be acceptable, IMO.
Of course, there's still no guarantee that clients will actually follow this guidance. One way to explicitly test/verify that they do accept unknown keys would be by randomly serving a key with an unknown version / algorithm / packet / subpacket / etc, either for a given test address or for any address (with a User ID saying "test key - ignore" or something). Such explicit ossification prevention has become somewhat popular in the TLS world, I believe.
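The "accept what you can parse" guidance can be sketched as follows; `parse_certificate` and the `(version, payload)` representation are stand-ins for a real OpenPGP parser, not an actual API:

```python
# Sketch: process a key block certificate by certificate, skipping any
# certificate this client cannot parse instead of failing the whole block.
KNOWN_VERSIONS = {4, 6}

def parse_certificate(blob):
    """Stand-in for a real OpenPGP parser; a cert is just (version, payload)."""
    version, payload = blob
    if version not in KNOWN_VERSIONS:
        raise ValueError(f"unknown key version {version}")
    return payload

def usable_certificates(key_block):
    """Return the certificates this client can process, ignoring the rest."""
    usable = []
    for blob in key_block:
        try:
            usable.append(parse_certificate(blob))
        except ValueError:
            continue  # unknown version/algorithm: skip, don't fail the block
    return usable

# A key block with one v4 cert, one v6 cert, and one future-version cert;
# the future-version cert is silently skipped.
block = [(4, "v4 ecc cert"), (6, "v6 pqc cert"), (7, "future cert")]
```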
For the v1 API, I could see serving a v4 PQC key as being potentially problematic, and one might prefer to serve a v4 ECC key instead, and reserve the PQC key for the v2 API. But, at that point, why not serve a v6 PQC key in the v2 API and a v4 ECC key in the v1 API? In other words, what purpose does the v4 PQC key serve, in this context?
If there is some reason, we could add some signalling to the v1 API of course, but at that point I think it would be better if the new clients (that would use the signalling) would just use the v2 API, instead. (Nevertheless, we might still need some additional language to only serve a key with certain algorithms in the v1 API, if available, which would be a bit messy but should be possible, if needed.)
> In my opinion, we shouldn't limit the types of incompatibilities that are "acceptable" up front. For example, let's say there comes some additional (non-backwards-compatible) feature after PQC, and someone generates two PQC keys, one with the feature and one without. Then "PQC-ready client cannot decode or verify [one of the] PQC certificate[s]" should be acceptable, IMO.
Here the idea would be that the key with the additional feature is also tagged with a corresponding keyword. Then an incompatible receiving client could see that one certificate carries a tag it cannot interpret, and thus attribute a decoding error when parsing that certificate to the corresponding unsupported feature. But when a certificate carries only known tags whose associated features are supported by the client, then I think it makes sense to draw attention to a decoding or usage error in connection with that certificate, because this might indicate an incompatible or buggy implementation. With all the new features like v6, PQC, and more to come, I would consider the probability of such incompatibilities arising now and then to be quite high. Thus it might make sense to be able to determine which certificates in a key block should work and which are not supported.
Of course, the keyword tagging should be done by the KOO implementation itself in order to ensure consistency. Then there would only need to be brief documentation indicating which keyword indicates which OpenPGP version / extension. This should be quite easy for receiving clients to support, as they only need to define a hard-coded set of supported features against which they check each key in the key block.
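Such a client-side check could be a hedged sketch like this, where the tag names and the hard-coded feature set are hypothetical, not specified anywhere in this proposal:

```python
# Sketch: a client classifying keys by comparing each key's tags against a
# hard-coded set of supported feature keywords (all names hypothetical).
SUPPORTED_FEATURES = {"default", "v6"}  # this example client has no PQC support

def classify(keys):
    """Split keys into those expected to work and those with unsupported tags."""
    expected_ok, unsupported = [], []
    for key in keys:
        if set(key["tags"]) <= SUPPORTED_FEATURES:
            expected_ok.append(key["fingerprint"])
        else:
            unsupported.append(key["fingerprint"])
    return expected_ok, unsupported

keys = [
    {"fingerprint": "AAAA", "tags": ["default"]},
    {"fingerprint": "BBBB", "tags": ["v6"]},
    {"fingerprint": "CCCC", "tags": ["v6", "pqc"]},
]
ok, unsup = classify(keys)
# A decode error on "AAAA" or "BBBB" now points at a bug; one on "CCCC" is expected.
```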
> Of course, there's still no guarantee that clients will actually follow this guidance. One way to explicitly test/verify that they do accept unknown keys would be by randomly serving a key with an unknown version / algorithm / packet / subpacket / etc, either for a given test address or for any address (with a User ID saying "test key - ignore" or something). Such explicit ossification prevention has become somewhat popular in the TLS world, I believe.
As explained above, I am not sure that it would be enough if all clients could gracefully ignore unknown versions, algorithms, packets, etc., since there might also be incompatibilities regarding a single feature among supporting clients, which ideally should be possible to track down as easily as possible.
> For the v1 API, I could see serving a v4 PQC key as being potentially problematic, and one might prefer to serve a v4 ECC key instead, and reserve the PQC key for the v2 API. But, at that point, why not serve a v6 PQC key in the v2 API and a v4 ECC key in the v1 API? In other words, what purpose does the v4 PQC key serve, in this context?
I think it is currently not clear to what degree v4 PQC will be needed. That will most likely depend on the availability of v6 support in all authoritative OpenPGP implementations. Ideally, v4 PQC will not be needed, but currently it seems that there might be implementations which will not support v6 at all.
I assume the v1 API only allows returning a single certificate anyway, is that right? In that case it is probably necessary for the user to decide which of his keys he would like to have served under the v1 API.
> As explained above, I am not sure that it would be enough if all clients could gracefully ignore unknown versions, algorithms, packets etc., since there might also be incompatibilities regarding a single feature among supporting clients which ideally should be possible to track down as easily as possible.
IMO, these incompatibilities should ideally be detected before the feature is used by keys in the wild, using things like the interoperability test suite. If an incompatibility does end up in the wild, there's not much point bothering the user with it IMO, since they likely can't do much about it and will probably want to use the other keys they do understand, so we might as well do so automatically.
> I assume the v1 API only allows returning a single certificate anyway, is that right?
Yes, indeed.
> In that case it is probably necessary for the user to decide which of his keys he would like to have served under the v1 API.
I would prefer not to make them choose explicitly, if it's possible to avoid it, and just pick the newest key that clients are likely to understand. That way the user doesn't have to think or know about what clients of the v1 API likely support, etc.
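The "pick the newest key clients are likely to understand" heuristic could be sketched as follows; the version/algorithm fields, the notion of a "v1-safe" set, and the example algorithm names are assumptions for illustration, not part of the proposal:

```python
# Sketch: for the v1 API, serve the newest key whose (version, algorithm)
# combination is assumed to be widely supported. All names are hypothetical.
V1_SAFE = {("v4", "ecc"), ("v4", "rsa")}  # assumed widely supported combinations

def v1_key(keys):
    """Newest key whose (version, algorithm) is considered v1-safe, else None."""
    candidates = [k for k in keys if (k["version"], k["algorithm"]) in V1_SAFE]
    return max(candidates, key=lambda k: k["created"], default=None)

keys = [
    {"fingerprint": "OLD", "version": "v4", "algorithm": "rsa", "created": 2015},
    {"fingerprint": "NEW", "version": "v4", "algorithm": "ecc", "created": 2021},
    {"fingerprint": "PQC", "version": "v6", "algorithm": "ml-kem", "created": 2024},
]
```

With this selection rule the user never has to choose explicitly: the v6 PQC key is reserved for the v2 API, and the v1 API automatically gets the newest broadly-compatible key.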