Adds API support for Project Deployment Frequency
Creates an API to access Deployment Frequency data at the Project level.
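For context, a request to the new endpoint should look roughly like the following; the exact route and parameter names are assumptions based on this description, not taken from the diff:

GET /projects/:id/analytics/deployment_frequency?environment=:environment_name&from=:start_datetime&to=:end_datetime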
Migrations:
± bin/rake db:migrate:up VERSION=20201211042306
== 20201211042306 AddDeploymentsFinderByFinishedAtIndex: migrating ============
-- transaction_open?()
-> 0.0000s
-- index_exists?(:deployments, [:project_id, :finished_at], {:where=>"status = 2", :name=>"index_deployments_on_project_and_finished", :algorithm=>:concurrently})
-> 0.0097s
-- execute("SET statement_timeout TO 0")
-> 0.0001s
-- add_index(:deployments, [:project_id, :finished_at], {:where=>"status = 2", :name=>"index_deployments_on_project_and_finished", :algorithm=>:concurrently})
-> 0.0152s
-- execute("RESET ALL")
-> 0.0001s
== 20201211042306 AddDeploymentsFinderByFinishedAtIndex: migrated (0.0257s) ===
± bin/rake db:migrate:down VERSION=20201211042306
== 20201211042306 AddDeploymentsFinderByFinishedAtIndex: reverting ============
-- transaction_open?()
-> 0.0000s
-- index_exists?(:deployments, [:project_id, :finished_at], {:where=>"status = 2", :name=>"index_deployments_on_project_and_finished", :algorithm=>:concurrently})
-> 0.0074s
-- execute("SET statement_timeout TO 0")
-> 0.0001s
-- remove_index(:deployments, {:where=>"status = 2", :name=>"index_deployments_on_project_and_finished", :algorithm=>:concurrently, :column=>[:project_id, :finished_at]})
-> 0.0104s
-- execute("RESET ALL")
-> 0.0001s
== 20201211042306 AddDeploymentsFinderByFinishedAtIndex: reverted (0.0185s) ===
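The migration file itself is not reproduced here, but judging from the output above it is presumably equivalent to this sketch built on GitLab's concurrent-index helpers (the Rails version tag and exact layout are assumptions):

# Reconstructed sketch, not the literal file from this MR.
class AddDeploymentsFinderByFinishedAtIndex < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  INDEX_NAME = 'index_deployments_on_project_and_finished'

  # Concurrent index creation must run outside a transaction,
  # matching the transaction_open?() check in the output above.
  disable_ddl_transaction!

  def up
    # Partial index covering successful deployments only (status = 2).
    add_concurrent_index :deployments, [:project_id, :finished_at],
                         where: 'status = 2', name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :deployments, INDEX_NAME
  end
end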
Queries:
SELECT
"deployments".*
FROM
"deployments"
INNER JOIN "environments" ON "environments"."id" = "deployments"."environment_id"
WHERE
"deployments"."project_id" = :project_id
AND (deployments.created_at >= :start_datetime)
AND (deployments.created_at < :end_datetime)
AND "deployments"."status" = 2
AND "environments"."name" = :environment_name;
Plan with execution:
Nested Loop  (cost=1.00..55847.19 rows=918 width=139) (actual time=1.445..1111.047 rows=36331 loops=1)
  Buffers: shared hit=662633 read=24746
  I/O Timings: read=642.676
  ->  Index Scan using index_deployments_on_project_and_finished on public.deployments  (cost=0.57..18593.04 rows=14577 width=139) (actual time=0.390..774.724 rows=140998 loops=1)
        Index Cond: ((deployments.project_id = 7764) AND (deployments.finished_at >= (now() - '30 days'::interval)) AND (deployments.finished_at < now()))
        Buffers: shared hit=96424 read=24745
        I/O Timings: read=641.462
  ->  Index Scan using environments_pkey on public.environments  (cost=0.43..2.56 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=140998)
        Index Cond: (environments.id = deployments.environment_id)
        Filter: ((environments.name)::text = 'production'::text)
        Rows Removed by Filter: 1
        Buffers: shared hit=566209 read=1
        I/O Timings: read=1.214
Recommendations:
Query processes too much data to return a relatively small number of rows – Reduce data cardinality as early as possible during execution, using one or more of the following techniques: new indexes, partitioning, query rewriting, or denormalization. See the plan visualization to understand which plan nodes are the main bottlenecks.
Add LIMIT – The result set is too large; limit the number of rows returned.
Specialized index needed – The indexes currently in use do not serve this query well (note Rows Removed by Filter: ..., meaning the scan fetched many non-target rows). Consider adding a more specialized index; see the sketch below.
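To make the last recommendation concrete: the deployments index scan fetched 140,998 rows, but only 36,331 survived the environment filter, because the environment name can only be checked after the join. A hypothetical, more specialized index (not part of this MR) would key on environment_id so that filtering happens inside the index scan:

# Hypothetical follow-up index, for illustration only.
add_concurrent_index :deployments, [:environment_id, :finished_at],
                     where: 'status = 2',
                     name: 'index_deployments_on_environment_and_finished'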
Summary:
Time: 1.115 s
- planning: 0.906 ms
- execution: 1.114 s
- I/O read: 642.676 ms
- I/O write: 0.000 ms
Shared buffers:
- hits: 662633 (~5.10 GiB) from the buffer pool
- reads: 24746 (~193.30 MiB) from the OS file cache, including disk I/O
- dirtied: 0
- writes: 0
Permalink: https://postgres.ai/console/gitlab/gitlab-production-tunnel/sessions/1515/commands/5068