Commit 98c15463 authored by Tristan Van Berkom

_scheduler/queues: Don't call update state outside of error handling harness

Commit 3fa79d8d, part of an initiative to cache failed builds, introduced a
call to Element._update_state() after a job completes but before entering the
error handling harness intended to handle plugin-raised errors.

Element._update_state() can trigger plugin code to run, so calling it at this
point is incorrect: any error raised there escapes the harness and crashes
BuildStream.
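
To illustrate the hazard, here is a minimal sketch with made-up names (not
BuildStream's actual API): errors raised by plugin code only become job
failures when they occur inside the error handling harness; raised anywhere
else, they propagate and take the scheduler down.

class PluginError(Exception):
    # Stand-in for an error raised by plugin code, e.g. while recomputing
    # element state.
    pass

def plugin_code():
    raise PluginError("plugin raised an error")

def error_handling_harness(callback):
    # The harness converts plugin errors into a failed job result instead
    # of letting them propagate and crash the process.
    try:
        callback()
        return True
    except PluginError as e:
        print("job failed: {}".format(e))
        return False

error_handling_harness(plugin_code)   # handled: reports a failed job

# Calling plugin-triggering code outside the harness, as the stray
# _update_state() call did, lets the exception propagate uncaught:
# plugin_code()                       # PluginError -> crash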

After analyzing the code, it appears that this additional call to
Element._update_state() is unnecessary; it was only added because the
state needs to be updated on failure as well as on success.

Instead, we now have the BuildQueue call Element._assemble_done()
unconditionally, regardless of whether the build was successful
or not, which has the same effect and also reads better.

In addition, this adds a FIXME comment noting that we still adjust the
artifact cache size in BuildQueue.done() only when the build succeeds.
This is incorrect because failed builds also grow the local artifact
cache; fixing it requires communicating the added artifact size through
Element._assemble() regardless of whether the build succeeded or failed.
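
As a hypothetical sketch of the direction that FIXME points at (illustrative
names, not BuildStream's actual API): if the assemble step reported the size
of the newly cached artifact for failed builds as well as successful ones,
the cache size bookkeeping in done() could run unconditionally.

def assemble(element, fail=False):
    # Pretend to build and cache the element; a failed build still writes
    # an artifact (logs, failure metadata) into the local cache, so it
    # still has a size worth reporting.
    if fail:
        return {"success": False, "artifact_size": 1024}
    return {"success": True, "artifact_size": 4096}

def done(element, result):
    # The equivalent of _assemble_done() runs unconditionally ...
    print("assembly done for", element)
    # ... and the cache size adjustment no longer needs an 'if success:'
    # guard, because both outcomes grew the local artifact cache.
    print("artifact cache grew by", result["artifact_size"], "bytes")

done("hello.bst", assemble("hello.bst", fail=True))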
parent f7581026
Pipeline #39001733 passed
@@ -106,10 +106,16 @@ class BuildQueue(Queue):
     def done(self, job, element, result, success):
 
-        if success:
-            # Inform element in main process that assembly is done
-            element._assemble_done()
+        # Inform element in main process that assembly is done
+        element._assemble_done()
 
-            # This has to be done after _assemble_done, such that the
-            # element may register its cache key as required
+        # This has to be done after _assemble_done, such that the
+        # element may register its cache key as required
+        #
+        # FIXME: Element._assemble() does not report both the failure state and the
+        #        size of the newly cached failed artifact, so we can only adjust the
+        #        artifact cache size for a successful build even though we know a
+        #        failed build also grows the artifact cache size.
+        #
+        if success:
             self._check_cache_size(job, element, result)
 
@@ -292,7 +292,6 @@ class Queue():
     # See the Job object for an explanation of the call signature
     #
     def _job_done(self, job, element, success, result):
-        element._update_state()
 
         # Update values that need to be synchronized in the main task
         # before calling any queue implementation