
[BB-6197] Automate sprint completion

Demid requested to merge 0x29a/bb6197/automate-sprint-completion into master

Description

Automates sprint completion for every cell by scheduling parallel complete_sprint_task tasks.
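
For illustration, the fan-out could look roughly like this (a minimal sketch, assuming a hypothetical get_cells() helper and that complete_sprint_task takes the cell as its argument; the real implementation may differ):

    from celery import group, shared_task


    @shared_task
    def complete_all_sprints_task():
        """Schedule sprint completion for every cell in parallel."""
        # complete_sprint_task is the existing per-cell task in the same module.
        cells = get_cells()  # hypothetical helper returning all cell names
        # Fan out one complete_sprint_task per cell as a parallel Celery group.
        group(complete_sprint_task.s(cell) for cell in cells).apply_async()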

Testing instructions

  1. git checkout 0x29a/bb6197/automate-sprint-completion.
  2. Modify .env to enable necessary features:
    echo "FEATURE_AUTOMATED_SPRINT_COMPLETION=true\nFEATURE_SPRINT_AUTOMATION=true" >> .env
  3. Start sprintcraft:
    docker-compose -f local.yml up -d
  4. Enter django container's shell:
    docker-compose -f local.yml exec django sh
  5. Create admin user:
    python manage.py createsuperuser
  6. Enter django shell:
    python manage.py shell
  7. Ensure that DEBUG is True:
    In [1]: from django.conf import settings; settings.DEBUG
    Out[1]: True
  8. Run complete_all_sprints_task:
    from sprintcraft.dashboard.tasks import complete_all_sprints_task
    complete_all_sprints_task.delay()
  9. Wait a few minutes.
  10. Check http://0.0.0.0:8000/admin/django_celery_beat/periodictask/. You should see the [ASYNC] Complete all sprints: every second task listed there, with START DATETIME equal to July 12, 2022, midnight.
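
For reference, the periodic task checked in the last step can be registered with django_celery_beat roughly like this (a sketch only; the name and start date mirror what the step above expects, but the actual registration code may differ):

    from datetime import datetime

    from django.utils import timezone
    from django_celery_beat.models import IntervalSchedule, PeriodicTask

    # Schedule: run every second.
    schedule, _ = IntervalSchedule.objects.get_or_create(
        every=1, period=IntervalSchedule.SECONDS
    )
    PeriodicTask.objects.get_or_create(
        name="[ASYNC] Complete all sprints: every second",
        task="sprintcraft.dashboard.tasks.complete_all_sprints_task",
        interval=schedule,
        defaults={"start_time": timezone.make_aware(datetime(2022, 7, 12))},
    )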

Other information

Check if we could perform all calculations for each item of the pipeline before sending any data. This way, if a part of the pipeline fails, then we will not need to deal with resolving missing partial dependencies.

Looks like there is not much we can do? The complete_sprint_task function is already split by the debug gate, and all computations (except those in the Celery tasks) are done before sending data.
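
For context, the pattern referred to here is roughly: compute everything first, then only touch Jira/Google Sheets when DEBUG is off. A hedged sketch with placeholder helpers, not the actual task body:

    from django.conf import settings


    def complete_sprint(cell):
        # Phase 1: compute everything up front, so a failure here leaves
        # no partially written external state behind.
        payload = compute_sprint_completion(cell)  # hypothetical helper

        if settings.DEBUG:
            # Debug gate: in local development, stop before mutating
            # Jira/Google Sheets and only report what would have been sent.
            print(f"DEBUG: would complete sprint for {cell}: {payload}")
            return

        # Phase 2: only now send data to the external services.
        send_sprint_completion(payload)  # hypothetical helper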

However, I don't see such a gate for the upload_spillovers_task and upload_commitments_task tasks.

Automate completing sprints for each cell. Determine whether it should be a serial or parallel process (for cells) - check things like, e.g., "Can we run into race conditions in Google Sheets/Jira?".

I couldn't find any evidence that there may be race conditions. We often complete sprints nearly at the same time, and I don't remember issues caused by that.
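
If race conditions ever did show up, switching from parallel to serial dispatch is a small change in Celery (a sketch, assuming complete_sprint_task takes the cell as its argument and cells is the list of cell names):

    from celery import chain, group

    # Parallel: every cell at once (the current approach).
    group(complete_sprint_task.s(cell) for cell in cells).apply_async()

    # Serial: one cell after another; .si() makes the signatures immutable,
    # so each task ignores the previous task's return value.
    chain(*(complete_sprint_task.si(cell) for cell in cells)).apply_async()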

This step will require some investigation. If a part of the pipeline is not crucial for ending the sprint (e.g. "Create role tickets", "Trigger the new sprint webhooks"), could we send errors to Sentry, but let the pipeline pass gracefully? Should we separate some computations from the main pipeline to be able to re-run the failed step with a management command? This does not need to be implemented as a part of this issue.
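
One way to let a non-crucial step fail gracefully is to report the exception to Sentry and carry on (a sketch with hypothetical step functions):

    import sentry_sdk


    def run_optional_step(step, *args, **kwargs):
        """Run a non-crucial pipeline step; report failures instead of aborting."""
        try:
            step(*args, **kwargs)
        except Exception as exc:
            # The sprint can still be closed without this step, so only
            # record the error for later investigation.
            sentry_sdk.capture_exception(exc)


    # e.g. run_optional_step(create_role_tickets, conn, sprint)       # hypothetical
    # e.g. run_optional_step(trigger_new_sprint_webhooks, sprint)     # hypothetical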

This may make the code a bit messy, but what if we add a steps keyword argument to complete_sprint_task and then do something like:

for issue in next_sprint_issues:
    # Run the unflag step only when it was requested.
    if UNFLAG in steps:
        unflag_issue(conn, issue)

Then, we could write a management command that schedules complete_sprint_task but passes only the steps we need. Just thinking aloud.
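
A rough sketch of such a management command (the cell argument and the steps keyword are hypothetical, assuming complete_sprint_task grows that argument):

    from django.core.management.base import BaseCommand

    from sprintcraft.dashboard.tasks import complete_sprint_task


    class Command(BaseCommand):
        help = "Re-run selected steps of the sprint completion pipeline."

        def add_arguments(self, parser):
            parser.add_argument("cell")
            parser.add_argument(
                "--steps", nargs="+", required=True,
                help="Names of the pipeline steps to run, e.g. UNFLAG.",
            )

        def handle(self, *args, **options):
            # Schedule only the requested steps for the given cell.
            complete_sprint_task.delay(options["cell"], steps=options["steps"])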

Resolves #51 and #17.

