[SPIKE] Investigate ingestion of significant vulnerability data into Elasticsearch

As a follow-up to #352665 (closed), we need to investigate the engineering complexity of ingesting large quantities of vulnerability data into Elasticsearch and parsing it, and to verify that Elasticsearch remains functional and performant under whatever ingestion strategy we choose.

Expected Outcomes

  1. Determine the engineering complexity involved in ingesting vulnerability data into Elasticsearch. Some questions to consider:
    1. Does Elasticsearch provide an API for pushing vulnerability records one at a time, or should we push them in bulk?
    2. Do we need to write the data to a file and ingest it into Elasticsearch some other way?
    3. How does GitLab currently ingest issue data into Elasticsearch for the Advanced Search capability, and can we mimic that approach?
    4. If the data needs to be ingested asynchronously, what kind of delays should we expect?
  2. Investigate the costs related to our use of Elasticsearch.
    1. What is our expected data domain size in Elasticsearch? What kind of cost impact might this have on GitLab.com? (Bearing in mind that GitLab.com already has an Elasticsearch deployment.)
  3. What will the impact be on our self-managed users?
    1. Can we mirror the existing issue Advanced Search opt-in mechanism?
    2. How will this affect users already opted into Advanced Search? Will they need to significantly expand their Elasticsearch clusters?
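As context for the one-at-a-time vs. bulk question above: Elasticsearch does expose a `_bulk` endpoint that accepts newline-delimited JSON, alternating action metadata lines with document bodies. The sketch below builds such a payload in plain Ruby; the document fields (`severity`, `report_type`) and index name are illustrative assumptions, not GitLab's actual vulnerability schema, and the real implementation would presumably go through the existing `elasticsearch` gem client rather than hand-built strings.

```ruby
require "json"

# Build an NDJSON payload for Elasticsearch's _bulk API: each document is
# preceded by an action line ({"index": ...}), and the payload must end
# with a trailing newline. Field names here are hypothetical examples.
def bulk_payload(vulnerabilities, index: "vulnerabilities")
  vulnerabilities.flat_map do |vuln|
    [
      { index: { _index: index, _id: vuln[:id] } }.to_json,
      vuln.to_json
    ]
  end.join("\n") + "\n"
end

vulns = [
  { id: 1, severity: "critical", report_type: "sast" },
  { id: 2, severity: "low", report_type: "dependency_scanning" }
]

payload = bulk_payload(vulns)
# This string would be POSTed to /_bulk with
# Content-Type: application/x-ndjson
```

One bulk request per batch (rather than one HTTP call per vulnerability) is generally the cheaper pattern for large ingestion volumes, which is likely relevant to both the delay and the cost questions above.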

Timebox: 4 Days