Question: Isn't IterablePubkeys for inclusion checks optimising for outdated RAM requirements?

The whitelist-policy uses IterablePubkeys for its inclusion check, with the stated rationale that it is for "efficiently loading from large files".

Checking inclusion sequentially is O(n) per lookup, which is not efficient for large lists. My intuition tells me I would rather load the file once into a set and check against that set: RAM is cheap, and a whitelist of 100k keys fits comfortably into even small-scale RAM.
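To illustrate what I mean, here is a minimal sketch (function names and file format are hypothetical, not from the actual whitelist-policy code): build the set once at startup, then every membership check is an O(1) average-time hash lookup instead of a scan.

```python
# Hypothetical sketch: load a whitelist (one hex pubkey per line)
# into a set once, then answer membership queries in O(1) average time.

def load_whitelist(lines):
    """Build a set of normalized pubkeys from an iterable of lines."""
    return {line.strip().lower() for line in lines if line.strip()}

def is_allowed(whitelist, pubkey):
    """Constant-time-on-average membership check against the set."""
    return pubkey.lower() in whitelist

# Usage: even 200k hex-string keys occupy only tens of MB as a set.
lines = ["deadbeef" * 8 + "\n", "cafebabe" * 8 + "\n"]
wl = load_whitelist(lines)
print(is_allowed(wl, "DEADBEEF" * 8))  # True (case-insensitive)
print(is_allowed(wl, "00" * 32))       # False
```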

If this is meant for blacklists: yes, spammers can send every message from a new pubkey, but adding all of those to a blacklist is probably not the way to go either.

Am I missing something? I'd need the whitelist for maybe 200k pubkeys and would really prefer to hand the policy an in-memory Set. Should I not bother baking this into whitelist-policy and instead create a whiteset-policy?

Background: My webOfTrust policy really is a whitelist policy with multiple whitelists as discussed in #2 (closed).