Fix TensorForcedEval in the case of the evaluator being copied.

Copying the evaluator also copies its pointer to the temporary buffer, so both copies refer to the same allocation. That means we can't deallocate/reallocate the buffer from either copy without risking a double-free or invalid memory access. If the buffer is to be shared between evaluators, it must be owned by a shared_ptr.

Existing usages assume the temporary buffer is already populated after copying, so we can't simply give each evaluator instance its own private buffer either. A rough sketch of the ownership issue follows below.
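
The following is a minimal, hypothetical sketch (not Eigen's actual TensorForcedEval code; the struct and member names are made up) contrasting a raw buffer pointer, which aliases across copies and double-frees on cleanup, with a shared_ptr-owned buffer that stays valid and populated for every copy and is released only by the last owner:

```cpp
// Illustrative only -- simplified stand-in for a forced-eval evaluator.
#include <cstddef>
#include <memory>

struct EvaluatorWithRawBuffer {
  float* m_buffer = nullptr;                 // raw pointer: copies alias it

  void evalSubExpr(std::size_t n) { m_buffer = new float[n]; }
  void cleanup() { delete[] m_buffer; m_buffer = nullptr; }
  // Copying this evaluator copies m_buffer; if both copies call cleanup(),
  // the second delete[] is a double-free.
};

struct EvaluatorWithSharedBuffer {
  std::shared_ptr<float> m_buffer;           // shared ownership across copies

  void evalSubExpr(std::size_t n) {
    m_buffer = std::shared_ptr<float>(new float[n],
                                      std::default_delete<float[]>());
  }
  void cleanup() { m_buffer.reset(); }       // frees only when the last copy lets go
  // A copy made after evalSubExpr() still sees the populated buffer, which is
  // what existing usages expect.
};

int main() {
  EvaluatorWithSharedBuffer a;
  a.evalSubExpr(128);
  EvaluatorWithSharedBuffer b = a;           // b shares the same buffer
  a.cleanup();                               // buffer stays alive for b
  b.cleanup();                               // last owner releases the memory
}
```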
