Commit 5acbcfd5 authored by Israel Weeks

Merge branch 'master' into 2274-build-refunds-model-in-dbt

parents 35513567 fc81f497
......@@ -112,10 +112,17 @@ python_black:
- pip install black
- black --check .
python_mypy:
<<: *python_check
script:
- pip install mypy
- mypy extract/ --ignore-missing-imports
python_pylint:
<<: *python_check
script:
- pylint ../analytics/ --ignore=dags --disable=C --disable=W1203 --disable=W1202 --reports=y --exit-zero
when: manual
python_complexity:
<<: *python_check
......
......@@ -10,7 +10,7 @@ Goal: To help bring you, our new data team member, up to speed in the GitLab Dat
- [ ] Manager: Upgrade Periscope user to editor (after they've logged in via Okta)
- [ ] Manager: Invite to `data-team` channel on Slack
- [ ] Manager: Update codeowners file in the handbook to include the new team member
- [ ] Manager: Add to daily Geekbot standup
- [ ] Manager: Add to daily Geekbot standup (send `dashboard` to Geekbot on Slack, click into a particular standup in the web UI, and add the new member via the Manage button)
- [ ] Manager: Add to Snowflake [following Handbook Process](https://about.gitlab.com/handbook/business-ops/data-team/#warehouse-access)
- Scratch schema will be your Snowflake username followed by `_scratch`, e.g. `jsmith_scratch`
- [ ] Manager: Invite to SheetLoad folder in gdrive
......@@ -105,7 +105,7 @@ You can use `Command + Option + L` to format your file.
- [ ] Refer to http://jinja.pocoo.org/docs/2.10/templates/ as a resource for understanding Jinja, which is used extensively in dbt.
- [ ] [This article](https://blog.fishtownanalytics.com/what-exactly-is-dbt-47ba57309068) talks about the what/why.
- [ ] [This introduction](https://docs.getdbt.com/docs/introduction) should help you understand what dbt is.
[ ] [This podcast](https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/) is a general walkthrough of dbt/interview with its creator, Drew Banin.
- [ ] [This podcast](https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/) is a general walkthrough of dbt/interview with its creator, Drew Banin.
- [ ] Read [how we use dbt](https://about.gitlab.com/handbook/business-ops/data-team/#-transformation) and our [SQL Style Guide](https://about.gitlab.com/handbook/business-ops/data-team/sql-style-guide/).
- [ ] Watch [video](https://drive.google.com/file/d/1ZuieqqejDd2HkvhEZeOPd6f2Vd5JWyUn/view) of Taylor introducing Chase to dbt.
- [ ] Peruse the [Official Docs](https://docs.getdbt.com).
......
Closes
List the tables added/changed below, and then run the `pgp_test` CI job.
When running the manual CI job, include the `MANIFEST_NAME` variable and input the name of the db (e.g. `gitlab_com`); see the sketch below.
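For reference, a minimal sketch of how the manual job turns the `MANIFEST_NAME` variable into the manifest path it tests (the authoritative definition is the `pgp_test` job added to the extract CI config in this MR):

```python
import os

# Sketch only: mirrors what the pgp_test CI job does with MANIFEST_NAME.
manifest_name = os.environ.get("MANIFEST_NAME", "gitlab_com")  # set when triggering the job
manifest_path = f"../manifests/{manifest_name}_db_manifest.yaml"

# The job then runs the pipeline's test load against that manifest:
print(f"python main.py tap {manifest_path} --load_type test")
```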
#### Tables Changed/Added
* [ ] List
#### PGP Test CI job passed?
* [ ] List
......@@ -18,6 +18,11 @@ Example: You might be looking at the count of opportunities before and after, if
Please include links to any related MRs and/or issues.
## Stakeholder Checklist (if applicable)
- [ ] Does the dbt model change provide the requested data?
- [ ] Does the dbt model change provide accurate data?
## Submitter Checklist
- [ ] This MR follows the coding conventions laid out in the [style guide](https://about.gitlab.com/handbook/business-ops/data-team/sql-style-guide/)
......
......@@ -16,8 +16,13 @@ If none, please include a description
**Editor Slack Handle**: @`handle`
### Submitter Checklist
### Stakeholder Checklist (if applicable)
* Review Items
* [ ] Does the dashboard provide the data requested?
* [ ] Is the data in the dashboard correct?
### Submitter Checklist
* Review Items
* [ ] SQL formatted using [GitLab Style Guide](https://about.gitlab.com/handbook/business-ops/data-team/sql-style-guide/)
* [ ] Python / R reviewed for content, formatting, and necessity
......@@ -36,7 +41,7 @@ If none, please include a description
* [ ] Legend is clear
* [ ] Text Tile for "What am I looking at?" and more detailed information, leveraging hyperlinks instead of URLs
* [ ] Tooltips are used where appropriate and show relevant values
* [ ] Request approval from stakeholder/business partner if applicable
* [ ] Request approval from stakeholder if applicable
* [ ] Assign to reviewer on the data team
* Housekeeping
......
......@@ -19,6 +19,7 @@ help:
++ Python Related ++ \n \
data-image: attaches to a shell in the data-image and mounts the repo for testing. \n \
lint: Runs a linter (Black) over the whole repo. \n \
mypy: Runs a type-checker in the extract dir. \n \
pylint: Runs the pylint checker over the whole repo. Does not check for code formatting, only errors/warnings. \n \
radon: Runs a cyclomatic complexity checker and shows anything with less than an A rating. \n \
xenon: Runs a cyclomatic complexity checker that will throw a non-zero exit code if the criteria aren't met. \n \
......@@ -64,6 +65,10 @@ lint:
@echo "Linting the repo..."
@black .
mypy:
@echo "Running mypy..."
@mypy extract/ --ignore-missing-imports
pylint:
@echo "Running pylint..."
@pylint ../analytics/ --ignore=dags --disable=C --disable=W1203 --disable=W1202 --reports=y
......
[![pipeline status](https://gitlab.com/gitlab-data/analytics/badges/master/pipeline.svg)](https://gitlab.com/gitlab-data/analytics/commits/master)
## Quick Links
* Data Team Handbook - https://about.gitlab.com/handbook/business-ops/data-team/#data-team-handbook
* dbt Docs - https://gitlab-data.gitlab.io/analytics/dbt/snowflake/#!/overview
......@@ -8,23 +6,13 @@
* [Email Address to Share Sheetloaded Doc with](https://docs.google.com/document/d/1m8kky3DPv2yvH63W4NDYFURrhUwRiMKHI-himxn1r7k/edit?usp=sharing) (GitLab Internal)
## Media
* [How Data Teams Do More With Less By Adopting Software Engineering Best Practices - Thomas's talk at the 2018 DataEngConf in NYC](https://www.youtube.com/watch?v=eu623QBwakc)
* [Taylor explains dbt](https://drive.google.com/open?id=1ZuieqqejDd2HkvhEZeOPd6f2Vd5JWyUn) (GitLab internal)
* [dbt docs intro with Drew Banin from Fishtown Analytics](https://www.youtube.com/watch?v=bqIBNvA9xjo)
* [Tom Cooney explains Zendesk](https://drive.google.com/open?id=1oExE1ZM5IkXcq1hJIPouxlXSiafhRRua) (GitLab internal)
* [Luca Williams explains Customer Success Dashboards](https://drive.google.com/open?id=1FsgvELNmQ0ADEC1hFEKhWNA1OnH-INOJ) (GitLab internal)
* [Art Nasser explains Netsuite and Campaign Data](https://drive.google.com/open?id=1KUMa8zICI9_jQDqdyN7mGSWSLdw97h5-) (GitLab internal)
* [Courtland Smith explains Marketing Dashboard needs](https://drive.google.com/open?id=1bjKWRdfUgcn0GfyB2rS3qdr_8nbRYAZu) (GitLab internal)
* GitLab blog post about Meltano - https://news.ycombinator.com/item?id=17667399
* Livestream chat with Sid and HN user slap_shot - https://www.youtube.com/watch?v=F8tEDq3K_pE
* Follow-up blog post to original Meltano post - https://about.gitlab.com/2018/08/07/meltano-follow-up/
* Data Source Overviews:
* [Pings](https://drive.google.com/file/d/1S8lNyMdC3oXfCdWhY69Lx-tUVdL9SPFe/view)
* [Salesforce](https://youtu.be/KwG3ylzWWWo)
* [Netsuite](https://www.youtube.com/watch?v=u2329sQrWDY)
* Taylor and Israel pair on a Lost MRR Dashboard
* [Part 1](https://www.youtube.com/watch?v=WuIcnpuS2Mg)
* [Part 2](https://youtu.be/HIlDH5gaL3M)
* [Netsuite and Campaign Data](https://drive.google.com/open?id=1KUMa8zICI9_jQDqdyN7mGSWSLdw97h5-)
* [Zendesk](https://drive.google.com/open?id=1oExE1ZM5IkXcq1hJIPouxlXSiafhRRua)
* [Customer Success Dashboards](https://drive.google.com/open?id=1FsgvELNmQ0ADEC1hFEKhWNA1OnH-INOJ)
## Contributing to the Data Team project
......
......@@ -142,7 +142,7 @@ dbt_source_cmd = f"""
cd analytics/transform/snowflake-dbt/ &&
export snowflake_load_database="RAW" &&
dbt deps --profiles-dir profile &&
dbt source snapshot-freshness --profiles-dir profile
true # dbt source snapshot-freshness --profiles-dir profile --target docs
"""
dbt_source_freshness = KubernetesPodOperator(
**gitlab_defaults,
......
......@@ -47,7 +47,7 @@ dbt_snapshot_cmd = f"""
"""
dbt_snapshot = KubernetesPodOperator(
**gitlab_defaults,
image="registry.gitlab.com/gitlab-data/data-image/dbt-image:63-upgrade-dbt-to-0-14",
image="registry.gitlab.com/gitlab-data/data-image/dbt-image:latest",
task_id="dbt-snapshots",
name="dbt-snapshots",
secrets=[
......
......@@ -58,7 +58,7 @@ if __name__ == "__main__":
)
# Custom Reports
report_mapping = dict(id_employee_number_mapping="423")
report_mapping = dict(id_employee_number_mapping="498")
for key, value in report_mapping.items():
logging.info(f"Querying for report number {value} into table {key}...")
......
......@@ -19,3 +19,15 @@ sheetload:
only:
- merge_requests
when: manual
pgp_test:
<<: *extract_definition
stage: extract
script:
- echo $MANIFEST_NAME
- cd extract/postgres_pipeline/postgres_pipeline/
- python main.py tap ../manifests/${MANIFEST_NAME}_db_manifest.yaml --load_type test
only:
- merge_requests
- $MANIFEST_NAME
when: manual
......@@ -23,6 +23,10 @@ Fully sync (backfilling):
* There are two conditions that trigger a full backfill: 1) the table doesn't exist in Snowflake, or 2) the schema has changed (for instance, a column was added, dropped, or even renamed).
* `pgp` will look at the max ID of the target table and backfill in million-ID increments, since every table implemented at GitLab is guaranteed to have an ID or some other primary key. A sketch of this ID-range approach follows below.
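A minimal sketch of the million-ID increment idea, assuming integer primary keys (the actual implementation lives in `id_query_generator`/`range_generator` in `utils.py` and may differ in detail):

```python
from typing import Generator, Tuple

def id_ranges(
    max_target_id: int, max_source_id: int, step: int = 1_000_000
) -> Generator[Tuple[int, int], None, None]:
    """Yield (lower, upper) ID ranges from the warehouse's max ID up to the source's max ID."""
    lower = max_target_id + 1
    while lower <= max_source_id:
        upper = min(lower + step - 1, max_source_id)
        yield lower, upper
        lower = upper + 1

# Each pair becomes a "... WHERE id BETWEEN lower AND upper" query against the source DB.
for lower, upper in id_ranges(2_000_000, 4_500_000):
    print(f"WHERE id BETWEEN {lower} AND {upper}")
```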
Test:
* When a table has changed or is new (including SCD tables), `pgp` will try to load 1 million rows of that table to ensure that it can be loaded. This catches the majority of data quality problems. The query shape is sketched below.
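Illustratively, the test load drops any incremental `WHERE` clause from the manifest query and caps the row count, along the lines of the `test_new_tables` function added to `main.py` in this MR (the table and columns here are only examples):

```python
# Sketch of the query built for the test load; values are illustrative.
import_query = (
    "SELECT id, status, created_at FROM ci_builds "
    "WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours' "
    "AND '{EXECUTION_DATE}'::timestamp"
)
primary_key = "id"
additional_filtering = ""

raw_query = import_query.split("WHERE")[0]  # keep only the SELECT ... FROM portion
test_query = f"{raw_query} WHERE {primary_key} IS NOT NULL {additional_filtering} LIMIT 1000000"
print(test_query)
```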
Validation (data quality check):
* _Documentation pending feature completion_
......
......@@ -5,6 +5,62 @@ connection_info:
database: GITLAB_COM_DB_NAME
port: PG_PORT
tables:
ci_builds:
import_db: GITLAB_DB
import_query: >
SELECT id
, status
, finished_at
, trace
, created_at
, updated_at
, started_at
, runner_id
, coverage
, commit_id
, commands
, name
, options
, allow_failure
, stage
, trigger_request_id
, stage_idx
, tag
, ref
, user_id
, type
, target_url
, description
, artifacts_file
, project_id
, artifacts_metadata
, erased_by_id
, erased_at
, CASE WHEN artifacts_expire_at > '2262-01-01' THEN '2262-01-01' ELSE artifacts_expire_at END AS artifacts_expire_at
, environment
, artifacts_size
, "when"
, yaml_variables
, queued_at
, token
, lock_version
, coverage_regex
, auto_canceled_by_id
, retried
, stage_id
, artifacts_file_store
, artifacts_metadata_store
, protected
, failure_reason
, scheduled_at
, token_encrypted
, upstream_pipeline_id
FROM ci_builds
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_dotcom'
export_table: ci_builds
export_table_primary_key: id
approvals:
import_db: GITLAB_DB
import_query: >
......@@ -231,6 +287,100 @@ tables:
export_schema: 'gitlab_dotcom'
export_table: ci_build_trace_section_names
export_table_primary_key: id
ci_stages:
import_db: GITLAB_DB
import_query: >
SELECT id
, project_id
, pipeline_id
, created_at
, updated_at
, name
, status
, lock_version
, position
FROM ci_stages
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_dotcom'
export_table: ci_stages
export_table_primary_key: id
ci_trigger_requests:
import_db: GITLAB_DB
import_query: >
SELECT id
, trigger_id
, variables
, created_at
, updated_at
, commit_id
FROM ci_trigger_requests
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_dotcom'
export_table: ci_trigger_requests
export_table_primary_key: id
ci_triggers:
import_db: GITLAB_DB
import_query: >
SELECT id
, token
, created_at
, updated_at
, project_id
, owner_id
, description
FROM ci_triggers
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_dotcom'
export_table: ci_triggers
export_table_primary_key: id
ci_variables:
import_db: GITLAB_DB
import_query: >
SELECT id
, key
, value
, project_id
, protected
, environment_scope
, masked
, variable_type
FROM ci_variables
export_schema: 'gitlab_dotcom'
export_table: ci_variables
export_table_primary_key: id
ci_pipelines:
import_db: GITLAB_DB
import_query: >
SELECT id
, created_at
, updated_at
, tag
, yaml_errors
, committed_at
, project_id
, status
, started_at
, finished_at
, duration
, user_id
, lock_version
, auto_canceled_by_id
, pipeline_schedule_id
, source
, config_source
, protected
, failure_reason
, iid
, merge_request_id
FROM ci_pipelines
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_com'
export_table: 'ci_pipelines'
export_table_primary_key: id
epic_issues:
import_db: GITLAB_DB
import_query: >
......@@ -611,6 +761,23 @@ tables:
export_schema: 'gitlab_com'
export_table: 'namespaces'
export_table_primary_key: id
namespace_root_storage_statistics:
import_db: GITLAB_DB
import_query: >
SELECT namespace_id
, repository_size
, lfs_objects_size
, wiki_size
, build_artifacts_size
, storage_size
, packages_size
, updated_at
FROM namespace_root_storage_statistics
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_com'
export_table: 'namespace_root_storage_statistics'
export_table_primary_key: namespace_id
notes:
import_db: GITLAB_DB
import_query: >
......@@ -703,8 +870,8 @@ tables:
import_query: >
SELECT id
, project_id
, TO_CHAR(created_at, 'YYYY-MM-DD HH:MI:SS') AS created_at
, TO_CHAR(updated_at, 'YYYY-MM-DD HH:MI:SS') AS updated_at
, created_at
, updated_at
, enabled
FROM project_auto_devops
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
......@@ -716,8 +883,8 @@ tables:
import_db: GITLAB_DB
import_query: >
SELECT id
, TO_CHAR(created_at, 'YYYY-MM-DD HH:MI:SS') AS created_at
, TO_CHAR(updated_at, 'YYYY-MM-DD HH:MI:SS') AS updated_at
, created_at
, updated_at
, project_id
, key
, value
......@@ -737,8 +904,8 @@ tables:
, wiki_access_level
, snippets_access_level
, builds_access_level
, TO_CHAR(created_at, 'YYYY-MM-DD HH:MI:SS') AS created_at
, TO_CHAR(updated_at, 'YYYY-MM-DD HH:MI:SS') AS updated_at
, created_at
, updated_at
, repository_access_level
FROM project_features
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
......@@ -752,10 +919,10 @@ tables:
SELECT id
, project_id
, group_id
, TO_CHAR(created_at, 'YYYY-MM-DD HH:MI:SS') AS created_at
, TO_CHAR(updated_at, 'YYYY-MM-DD HH:MI:SS') AS updated_at
, created_at
, updated_at
, group_access
, TO_CHAR(expires_at, 'YYYY-MM-DD HH:MI:SS') AS expires_at
, expires_at
FROM project_group_links
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
......@@ -777,9 +944,9 @@ tables:
SELECT id
, project_id
, retry_count
, TO_CHAR(last_update_started_at, 'YYYY-MM-DD HH:MI:SS') AS last_update_started_at
, TO_CHAR(last_update_scheduled_at, 'YYYY-MM-DD HH:MI:SS') AS last_update_scheduled_at
, TO_CHAR(next_execution_timestamp, 'YYYY-MM-DD HH:MI:SS') AS next_execution_timestamp
, last_update_started_at
, last_update_scheduled_at
, next_execution_timestamp
FROM project_mirror_data
export_schema: 'gitlab_com'
export_table: 'project_mirror_data'
......@@ -886,8 +1053,8 @@ tables:
, subscribable_id
, subscribable_type
, subscribed
, TO_CHAR(created_at, 'YYYY-MM-DD HH:MI:SS') AS created_at
, TO_CHAR(updated_at, 'YYYY-MM-DD HH:MI:SS') AS updated_at
, created_at
, updated_at
, project_id
FROM subscriptions
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
......@@ -984,59 +1151,3 @@ tables:
export_schema: 'gitlab_dotcom'
export_table: ci_builds_runner_session
export_table_primary_key: build_id
ci_builds:
import_db: GITLAB_DB
import_query: >
SELECT id
, status
, finished_at
, trace
, created_at
, updated_at
, started_at
, runner_id
, coverage
, commit_id
, commands
, name
, options
, allow_failure
, stage
, trigger_request_id
, stage_idx
, tag
, ref
, user_id
, type
, target_url
, description
, artifacts_file
, project_id
, artifacts_metadata
, erased_by_id
, erased_at
, artifacts_expire_at
, environment
, artifacts_size
, "when"
, yaml_variables
, queued_at
, token
, lock_version
, coverage_regex
, auto_canceled_by_id
, retried
, stage_id
, artifacts_file_store
, artifacts_metadata_store
, protected
, failure_reason
, scheduled_at
, token_encrypted
, upstream_pipeline_id
FROM ci_builds
WHERE updated_at BETWEEN '{EXECUTION_DATE}'::timestamp - interval '{HOURS} hours'
AND '{EXECUTION_DATE}'::timestamp
export_schema: 'gitlab_dotcom'
export_table: ci_builds
export_table_primary_key: id
......@@ -13,7 +13,7 @@ from utils import (
id_query_generator,
manifest_reader,
)
from validation import *
from validation import get_comparison_results
SCHEMA = "tap_postgres"
......@@ -83,6 +83,7 @@ def sync_incremental_ids(
raw_query = table_dict["import_query"]
additional_filtering = table_dict.get("additional_filtering", "")
primary_key = table_dict["export_table_primary_key"]
if "{EXECUTION_DATE}" not in raw_query:
logging.info(f"Table {table} does not need sync processing.")
return False
......@@ -93,16 +94,11 @@ def sync_incremental_ids(
return False
id_queries = id_query_generator(
source_engine,
table_dict["export_table_primary_key"],
raw_query,
target_engine,
table,
table_name,
source_engine, primary_key, raw_query, target_engine, table, table_name
)
# Iterate through the generated queries
for query in id_queries:
filtered_query = f"{query} {additional_filtering} ORDER BY id"
filtered_query = f"{query} {additional_filtering} ORDER BY {primary_key}"
logging.info(filtered_query)
chunk_and_upload(filtered_query, source_engine, target_engine, table_name)
return True
......@@ -177,7 +173,35 @@ def validate_ids(
return True
def main(file_path: str, load_type: str = None) -> None:
def test_new_tables(
source_engine: Engine,
target_engine: Engine,
table: str,
table_dict: Dict[Any, Any],
table_name: str,
) -> bool:
"""
Load a set amount of rows for each new table in the manifest. A table is
considered new if it doesn't already exist in the data warehouse.
"""
raw_query = table_dict["import_query"].split("WHERE")[0]
additional_filtering = table_dict.get("additional_filtering", "")
primary_key = table_dict["export_table_primary_key"]
# Figure out if the table exists
if "_TEMP" != table_name[-5:] and not target_engine.has_table(f"{table_name}_TEMP"):
logging.info(f"Table {table} already exists and won't be tested.")
return False
# If the table doesn't exist, load 1 million rows (or whatever the table has)
query = f"{raw_query} WHERE {primary_key} IS NOT NULL {additional_filtering} LIMIT 1000000"
chunk_and_upload(query, source_engine, target_engine, table_name)
return True
def main(file_path: str, load_type: str) -> None:
"""
Read data from a postgres DB and upload it directly to Snowflake.
"""
......@@ -194,6 +218,7 @@ def main(file_path: str, load_type: str = None) -> None:
"incremental": load_incremental,
"scd": load_scd,
"sync": sync_incremental_ids,
"test": test_new_tables,
"validate": validate_ids,
}
......
......@@ -47,7 +47,7 @@ def table_cleanup():
SNOWFLAKE_ENGINE.dispose()
class TestTapPostgres:
class TestPostgresPipeline:
def test_query_results_generator(self):
"""
Test loading a dataframe by querying a known table.
......
......@@ -157,7 +157,8 @@ def id_query_generator(
target_table: str,
) -> Generator[str, Any, None]:
"""
This function syncs a database with Snowflake based on IDs for each table.
This function syncs a database with Snowflake based on the user-defined
primary keys for each table.
Gets the diff between the IDs that exist in the DB vs the DW, loads any rows
with IDs that are missing from the DW.
......@@ -178,7 +179,9 @@ def id_query_generator(
# Get the max ID from the source DB
logging.info(f"Getting max ID from source_table: {source_table}")
max_source_id_query = f"SELECT MAX({primary_key}) as id FROM {source_table}"
max_source_id_query = (
f"SELECT MAX({primary_key}) as {primary_key} FROM {source_table}"
)
try:
max_source_id_results = query_results_generator(
max_source_id_query, postgres_engine
......@@ -192,7 +195,7 @@ def id_query_generator(
for id_pair in range_generator(max_target_id, max_source_id):
id_range_query = (
"".join(raw_query.lower().split("where")[0])
+ f"WHERE id BETWEEN {id_pair[0]} AND {id_pair[1]}"
+ f"WHERE {primary_key} BETWEEN {id_pair[0]} AND {id_pair[1]}"
)
logging.info(f"ID Range: {id_pair}")
yield id_range_query
......
test_sheet.test_sheet
calendar.calendar
employee_location_factor.employee_location_factor
google_referrals.google_referrals
headcount.headcount
......
......@@ -502,6 +502,12 @@ roles:
warehouses:
- reporting
- jstark:
warehouses:
- engineer_xs
- engineer_xl
- loading
- reporting
- jurbanc:
warehouses:
- reporting
......@@ -574,6 +580,10 @@ roles:
warehouses:
- target_snowflake
- tcarter:
warehouses:
- reporting
- tlapiana:
warehouses:
- engineer_xs
......@@ -683,6 +693,12 @@ users:
- securityadmin
- sysadmin
- jstark:
can_login: yes
member_of:
- analyst_core
- jstark
- jurbanc:
can_login: yes
member_of:
......@@ -777,6 +793,13 @@ users:
- stitch
- loader
- tcarter:
can_login: yes
member_of:
- securityadmin
- sysadmin
- tcarter
- target_snowflake:
can_login: yes
member_of:
......
......@@ -90,7 +90,7 @@ gitlab-org/gitlab-services/design.gitlab.com,4456656
gitlab-org/customers-gitlab-com,2670515
gitlab-com/license-gitlab-com,6457868
gitlab-com/version-gitlab-com,6491770
charts/auto-deploy-app,6329546
gitlab-org/charts/auto-deploy-app,11915984
gitlab-org/security-products/container-scanning,11428501
gitlab-org/security-products/license-management,6130122
gitlab-org/security-products/gemnasium-db,12006272
......@@ -122,7 +122,7 @@ gitlab-org/security-products/tests/java-maven,5457651
gitlab-org/security-products/tests/js-yarn,5456231
gitlab-org/cluster-integration/helm-install-image,9184510
gitlab-org/cluster-integration/auto-deploy-image,11688089
charts/components/gitlab-operator,7686095
gitlab-org/charts/components/gitlab-operator,14240586
gitlab-org/charts/gitlab,3828396
gitlab-org/fulfillment,10619765