
Draft: KEP-2: Support for multiple keys

Open Daniel Huigens requested to merge kep-2 into main
2 unresolved threads

This KEP is meant as an alternative proposal to KEP-1, and at most one of them should be accepted.

KEP-2 has a larger scope than KEP-1, and partially reimagines the way KOO stores, manages, and serves keys. This means that implementing this KEP would be more effort than implementing KEP-1. However, it aims to set us up not just for the transition from v4 to v6 keys, but also for the transition to post-quantum keys, and generally provides a more flexible base. (Nevertheless, policy decisions such as limitations on the number of keys may still be agreed upon, if desirable.)

Merge request reports

Approval is optional
Merge blocked: 1 check failed
Merge request must not be draft.

Merge details

  • The source branch is 98 commits behind the target branch.
  • 1 commit and 1 merge commit will be added to main.
  • Source branch will be deleted.

Activity

  • Daniel Huigens added 1 commit


    • dc53f229 - KEP-2: Support for multiple keys


    • Thanks for writing this up. For now I have just one remark about the terminology.

      I really dislike the term key block. First, Werner really likes it. Second, I think it is very imprecise. From the text I gather you mean a non-empty sequence of Transferable Public Keys (TPKs) in an armor container. But if you look at the GnuPG source, key block means TPK, armored or not, serialized or in core.

    I have no good alternative to offer, but I'd avoid using the term for TPK (for which there is some precedent), and I'd be even more strongly against using it for something other than the established meaning.

    • Alright. I'll try to think of an alternate term, if anyone else has suggestions it'd be welcome too :)

    • Maybe simply "armored keys" is sufficient? It's still a bit ambiguous, e.g. it could also refer to multiple armored containers each containing a single key, but perhaps if we clearly define what we mean it would be fine. Or we can just spell it out every time.

    • Isn't "keyring" the accepted generic term for "a collection of keys that are treated as a unit" ?

    • In my mind, a keyring is a local collection of all the keys you have, including your own private keys and the public keys of all your contacts. It might be a bit confusing if we use it to describe the keys of one person?

      (I think it also still has the same problem Justus mentioned which is that it doesn't tell you anything about how it's encoded, i.e. armored or not, etc.)

      Edited by Daniel Huigens
    • We already have "secret keyrings" and "public keyrings" as distinct objects; file emits "PGP/GPG key public ring" when given a TPK file; there is plenty of precedent in the wild. If we want to clarify this particular usage of the term we could qualify it, say "personal keyring" or "published keyring" or some such?

      And I think encoding-independent terminology is a good thing. Wire formats are an implementation detail that should not normally be displayed in a UI, for example.

    • not sure about "keyring", it's pretty overloaded already and used with different semantics in different APIs. as a particular example, BouncyCastle uses "keyring" in the sense of "certificate" (e.g. PgpPublicKeyRing)

    • @valodim ok, then keeping entirely clear of overloaded terminology, what about "bulletin" in the sense of "terse announcement"?

      Edited by Andrew Gallagher
  • Daniel Huigens added 1 commit


    • eaee3e52 - KEP-2: Support for multiple keys


    • Thanks, Daniel, for this proposal. I am especially interested in how exactly this is going to support the PQC transition. As far as I currently understand, the querying API is agnostic of the PQC aspect of keys. In the current PQC draft, both v4 PQC keys (encryption only) and v6 PQC keys (encryption and signature) are proposed. From the current proposal it seems that

      • API v1 would return v4 PQC keys,
      • API v2 would return v4 and v6 PQC keys

      whenever such keys are uploaded under the respective e-mail address / fingerprint. Is that really sufficient for the transition? It is known that at least some clients choke on unknown key and signature versions. Against this background, I think the main question is: will clients that implement the v1 and v2 API correctly for traditional v4 and v6 keys, but don't implement PQC, be able to deal with "key blocks" that contain PQC keys? If a key block contains one certificate with only traditional keys and one with PQC keys, will they be able to process at least the one with the traditional keys? Will the client output an error message for the certificate with unknown algorithm (and signature) versions?

      Would it maybe make sense to extend at least the v2 protocol (potentially also v1) to (optionally?) tag the keys within a key block with certain predefined keywords? One such keyword could then be "pqc". The default keys, which are expected to be supported by all clients, might be tagged with the keyword "default" as well. This could prevent unexpected results by letting a client tell whether an error while processing some of the certificates in a key block is an acceptable condition (e.g., a non-PQC client rejecting a PQC certificate) or a non-acceptable one (e.g., a PQC-ready client failing to decode or verify a PQC certificate).
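      As a rough illustration of the tagging idea (the response shape and the keyword names here are purely hypothetical, not part of any existing KOO API):

      ```python
      # Hypothetical tagged v2 response; "default", "pqc", and "future-feature"
      # are example keywords, and the JSON shape is invented for illustration.
      response = {
          "keys": [
              {"armored": "-----BEGIN PGP PUBLIC KEY BLOCK-----...", "tags": ["default"]},
              {"armored": "-----BEGIN PGP PUBLIC KEY BLOCK-----...", "tags": ["pqc"]},
              {"armored": "-----BEGIN PGP PUBLIC KEY BLOCK-----...", "tags": ["pqc", "future-feature"]},
          ]
      }

      # The client hard-codes the set of features it implements.
      SUPPORTED_TAGS = {"default", "pqc"}

      # Certificates carrying only supported tags are expected to parse; a
      # failure on one of these indicates a real incompatibility, while a
      # certificate with an unknown tag can be skipped silently.
      expected_to_work = [k for k in response["keys"]
                          if set(k["tags"]) <= SUPPORTED_TAGS]
      ```

      With this shape, the "unsupported feature" check is a simple subset test against a hard-coded set, as the comment suggests.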

    • Hi Falko, thanks for the comments. To be perfectly honest, I would prefer to avoid the requirement for such signalling in the v2 API, if possible, by explicitly saying up front that clients should be able to deal with a key block which contains some keys which they can parse and some keys which they can't. There's some prose for how clients should deal with that here.
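      For illustration, the intended client behaviour could be sketched roughly like this (a minimal sketch, not KOO's or any real client's implementation; it only implements the RFC 4880 packet framing needed to split a binary key block into certificates, does not handle partial lengths, and uses a toy "supported versions" set):

      ```python
      def iter_packets(data: bytes):
          """Walk OpenPGP packets (RFC 4880 framing; partial lengths unsupported)."""
          i = 0
          while i < len(data):
              octet = data[i]
              if octet & 0x40:                      # new-format header
                  tag = octet & 0x3F
                  l0 = data[i + 1]
                  if l0 < 192:
                      length, hdr = l0, 2
                  elif l0 < 224:
                      length, hdr = ((l0 - 192) << 8) + data[i + 2] + 192, 3
                  elif l0 == 255:
                      length, hdr = int.from_bytes(data[i + 2:i + 6], "big"), 6
                  else:
                      raise ValueError("partial lengths not handled in this sketch")
              else:                                 # old-format header
                  tag = (octet >> 2) & 0x0F
                  ltype = octet & 0x03
                  if ltype == 3:
                      raise ValueError("indeterminate length not handled")
                  hdr = {0: 2, 1: 3, 2: 5}[ltype]
                  length = int.from_bytes(data[i + 1:i + hdr], "big")
              yield tag, data[i + hdr:i + hdr + length], data[i:i + hdr + length]
              i += hdr + length

      def usable_certs(key_block: bytes, supported=(4, 6)):
          """Split a key block at each Public-Key packet (tag 6) and keep only
          certificates whose primary key version the client understands,
          silently dropping the rest instead of failing the whole block."""
          certs, current, version = [], b"", None
          for tag, body, raw in iter_packets(key_block):
              if tag == 6:                          # a new certificate starts here
                  if current and version in supported:
                      certs.append(current)
                  current, version = b"", body[0]   # first body octet = key version
              current += raw
          if current and version in supported:
              certs.append(current)
          return certs
      ```

      Splitting at tag 6 groups User IDs, signatures, and subkeys (tag 14) with their primary key, so a certificate with an unknown primary key version is dropped as a unit while the rest of the key block remains usable.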

      In my opinion, we shouldn't limit the types of incompatibilities that are "acceptable" up front. For example, let's say there comes some additional (non-backwards-compatible) feature after PQC, and someone generates two PQC keys, one with the feature and one without. Then "PQC-ready client cannot decode or verify [one of the] PQC certificate[s]" should be acceptable, IMO.

      Of course, there's still no guarantee that clients will actually follow this guidance. One way to explicitly test/verify that they do accept unknown keys would be by randomly serving a key with an unknown version / algorithm / packet / subpacket / etc, either for a given test address or for any address (with a User ID saying "test key - ignore" or something). Such explicit ossification prevention has become somewhat popular in the TLS world, I believe.
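      Server-side, that ossification-prevention idea could look something like this (a sketch only; the version number 99 and the 1% probability are arbitrary, and a real test key would be a full well-formed certificate with a "test key - ignore" User ID rather than a lone packet):

      ```python
      import random

      # New-format Public-Key packet (tag 6), one-octet length 1, unknown version 99.
      UNKNOWN_VERSION_KEY = bytes([0xC6, 0x01, 99])

      def maybe_add_grease(key_block: bytes, probability: float = 0.01) -> bytes:
          """Occasionally prepend a syntactically valid key with an unknown
          version, so clients that fail to skip unknown material get caught
          early (analogous to GREASE in TLS, RFC 8701)."""
          if random.random() < probability:
              return UNKNOWN_VERSION_KEY + key_block
          return key_block
      ```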


      For the v1 API, I could see serving a v4 PQC key as being potentially problematic, and one might prefer to serve a v4 ECC key instead, and reserve the PQC key for the v2 API. But, at that point, why not serve a v6 PQC key in the v2 API and a v4 ECC key in the v1 API? In other words, what purpose does the v4 PQC key serve, in this context?

      If there is some reason, we could add some signalling to the v1 API of course, but at that point I think it would be better if the new clients (that would use the signalling) would just use the v2 API, instead. (Nevertheless, we might still need some additional language to only serve a key with certain algorithms in the v1 API, if available, which would be a bit messy but should be possible, if needed.)

      Edited by Daniel Huigens
    • In my opinion, we shouldn't limit the types of incompatibilities that are "acceptable" up front. For example, let's say there comes some additional (non-backwards-compatible) feature after PQC, and someone generates two PQC keys, one with the feature and one without. Then "PQC-ready client cannot decode or verify [one of the] PQC certificate[s]" should be acceptable, IMO.

      Here the idea would be that the key with the additional feature is also tagged with a corresponding keyword. An incompatible receiving client could then see that one certificate carries a tag it cannot interpret, and attribute a decoding error when parsing that certificate to the corresponding unsupported feature. But when a certificate only carries known tags, whose associated features the client supports, then I think it makes sense to draw attention to a decoding or usage error in connection with that certificate, because this might indicate incompatible or buggy implementations. With all the new features like v6, PQC and more to come, I would estimate the probability of such incompatibilities arising now and then to be quite high. Thus it might make sense to be able to determine which certificates in a key block should work and which are not supported.

      Of course, the keyword tagging should be done by the KOO implementation itself in order to ensure consistency. Then only a brief piece of documentation would be needed, indicating which keyword corresponds to which OpenPGP version / extension. This should be quite easy for receiving clients to support, as they only need to define a hard-coded set of supported features against which they check each key in the key block.

      Of course, there's still no guarantee that clients will actually follow this guidance. One way to explicitly test/verify that they do accept unknown keys would be by randomly serving a key with an unknown version / algorithm / packet / subpacket / etc, either for a given test address or for any address (with a User ID saying "test key - ignore" or something). Such explicit ossification prevention has become somewhat popular in the TLS world, I believe.

      As explained above, I am not sure that it would be enough if all clients could gracefully ignore unknown versions, algorithms, packets, etc., since there might also be incompatibilities regarding a single feature among supporting clients, which should ideally be as easy as possible to track down.

      For the v1 API, I could see serving a v4 PQC key as being potentially problematic, and one might prefer to serve a v4 ECC key instead, and reserve the PQC key for the v2 API. But, at that point, why not serve a v6 PQC key in the v2 API and a v4 ECC key in the v1 API? In other words, what purpose does the v4 PQC key serve, in this context?

      I think it is currently not clear to what degree v4 PQC will be needed. That will most likely depend on the availability of v6 support in all authoritative OpenPGP implementations. Ideally, v4 PQC will not be needed, but currently it seems that there might be implementations which will not support v6 at all.

      I assume the v1 API only allows returning a single certificate anyway, is that right? In that case it is probably necessary for the user to decide which of his keys he would like to have served under the v1 API.

      Edited by Falko Strenzke
    • There is no excuse for clients not ignoring unknown material. Please don't complicate the API to accommodate broken clients.

    • As explained above, I am not sure that it would be enough if all clients could gracefully ignore unknown versions, algorithms, packets etc., since there might also be incompatibilities regarding a single feature among supporting clients which ideally should be possible to track down as easily as possible.

      IMO, these incompatibilities should ideally be detected before the feature is used by keys in the wild, using things like the interoperability test suite. If an incompatibility does end up in the wild, there's not much point bothering the user with it IMO, since they likely can't do much about it and will probably want to use the other keys they do understand, so we might as well do so automatically.

      I assume the v1 API only allows returning a single certificate anyway, is that right?

      Yes, indeed.

      In that case it is probably necessary for the user to decide which of his keys he would like to have served under the v1 API.

      I would prefer not to make them choose explicitly, if it's possible to avoid it, and just pick the newest key that clients are likely to understand. That way the user doesn't have to think or know about what clients of the v1 API likely support, etc.
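      That selection rule could be sketched as follows (the certificate records, their fields, and the notion of "likely to be understood" are all assumptions for illustration, not part of the proposal):

      ```python
      # Hypothetical summaries of one user's published certificates.
      CERTS = [
          {"fpr": "AAAA...", "version": 4, "pqc": False, "created": 2019},
          {"fpr": "BBBB...", "version": 4, "pqc": False, "created": 2023},
          {"fpr": "CCCC...", "version": 6, "pqc": True,  "created": 2025},
      ]

      def v1_key(certs):
          """For the legacy v1 API, automatically serve the newest key that
          old clients are likely to understand (here: v4 and non-PQC),
          instead of making the user choose explicitly."""
          candidates = [c for c in certs if c["version"] == 4 and not c["pqc"]]
          return max(candidates, key=lambda c: c["created"], default=None)
      ```

      The v2 API would still serve the full key block; only the single-certificate v1 endpoint needs a selection heuristic like this.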

    • OK, agreed. In any case you guys have the practical experience with these things, my ideas might be a bit theoretical in the end.
