Data acquisition flow and dataset structure
Problem definition:
Currently, during data acquisition, Quantify Scheduler returns a dataset (v1) in the following format:

import xarray

ch_0 = [0.0, 0.2, 0.4, 0.6, 0.8]  # acquired values, one per setpoint
amps = [0.0, 0.25, 0.5, 0.75, 1.0]  # swept setpoints (illustrative values)
data_vars = dict(
    ch_0=(["acq_index_ch_0"], ch_0),
)
coords = dict(
    amp=(["acq_index_ch_0"], amps),
    acq_index_ch_0=range(len(amps)),
)
dataset = xarray.Dataset(
    data_vars=data_vars,
    coords=coords,
)
dataset
This format is not standardized for all current schedules, and in particular does not support advanced schedules with different numbers of operations on different qubits, or operations on non-consecutive qubits.
To resolve this issue, the data acquisition process was modified: the Instrument Coordinator now composes a dataset from the bytes provided by the hardware in a structured format organized by qubits rather than by physical parameters, for example:
{q0: (["repetition", "acq_index"]: [[100x3]])}
{q0: (["repetition", "acq_index"]: [[100x1]])}
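As a sketch, the qubit-keyed structure above could be represented as an xarray.Dataset with one data variable per qubit; the variable names, shapes, and placeholder values here are illustrative assumptions, not the actual Instrument Coordinator output:

```python
import numpy as np
import xarray

# Illustrative sizes matching the 100x3 example above
repetitions, n_acq = 100, 3
q0_data = np.zeros((repetitions, n_acq), dtype=complex)  # placeholder acquisition data

structured = xarray.Dataset(
    data_vars={"q0": (["repetition", "acq_index"], q0_data)},
    coords={
        "repetition": range(repetitions),
        "acq_index": range(n_acq),
    },
)
print(structured["q0"].shape)  # (100, 3)
```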
To support backward compatibility, the Instrument Coordinator converts the structured dataset into the old format supported by Measurement Control and Analysis. The current implementation is temporary: it needs to be improved and has a number of issues.
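A minimal sketch of such a conversion for the simple case (one channel per qubit, averaged over repetitions); the function name and the averaging step are assumptions for illustration, not the actual Instrument Coordinator code:

```python
import numpy as np
import xarray


def to_v1_dataset(structured: xarray.Dataset) -> xarray.Dataset:
    """Hypothetical helper: map each qubit variable of a structured dataset
    to a ch_<i> data variable in the old (v1) format, averaging over the
    "repetition" dimension."""
    data_vars = {}
    coords = {}
    for i, (qubit, da) in enumerate(structured.data_vars.items()):
        averaged = da.values.mean(axis=0)  # average over "repetition"
        dim = f"acq_index_ch_{i}"
        data_vars[f"ch_{i}"] = ([dim], averaged)
        coords[dim] = range(len(averaged))
    return xarray.Dataset(data_vars=data_vars, coords=coords)
```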
Proposed discussions:
Quantify Scheduler should support both basic and advanced schedules, as well as QCoDeS settables and gettables, without breaking analysis and plotmon. There are a few approaches:
- Move the conversion code to Measurement Control and apply it selectively, only to those datasets that can be converted without data loss. This would preserve analysis and plotmon support for simple schedules; analysis of advanced schedules would be resolved later.
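A hypothetical check Measurement Control could use to decide whether a dataset converts without loss; the criterion here (every qubit variable sharing a single ("repetition", "acq_index") grid) is an assumption sketched for illustration:

```python
import numpy as np
import xarray


def is_losslessly_convertible(structured: xarray.Dataset) -> bool:
    """Heuristic sketch: a dataset is convertible when every qubit variable
    is a plain ("repetition", "acq_index") array, i.e. all qubits share one
    acquisition grid. Advanced schedules with per-qubit dimensions (e.g.
    different numbers of acquisitions per qubit) fail this check."""
    return all(
        da.dims == ("repetition", "acq_index")
        for da in structured.data_vars.values()
    )
```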
- Develop a third, unified dataset format for Measurement Control that encompasses both previous formats and is fully backward compatible with analysis and plotmon.