Record the x and y offsets as well as the width and height of the bounding box.
The goal of this step is to get a 16-bit .tif image stack. For benchtop sources, this is probably already done as part of the reconstruction process. For synchrotron scans, this step may be necessary. For example, in the 2019 Diamond Light Source scans, the reconstruction output 32-bit float .hdf files from which .tif slices need to be extracted. Often, such as with the fragments scanned in that session, there is a separate .hdf file for each "slab". Slices should be extracted from each and then merged later. The extraction for multiple .hdf files can be done in one command.
The range of values in the float .hdf is not the same as the 16-bit integer representation, so the values need to be stretched to [0, 65535] during this process. Use `--auto-percentile-windowing` to do this automatically.
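The exact extraction command depends on the tooling in use, but the underlying operation is simple enough to sketch. Below is a minimal, illustrative Python version, assuming `h5py` and `tifffile` are available; the dataset path `/entry/data/data` and the percentile defaults are placeholders, not confirmed details of the Diamond Light Source files. It windows the float data to [0, 65535] by percentile, analogous to what `--auto-percentile-windowing` does, and writes 16-bit .tif slices:

```python
import h5py
import numpy as np
import tifffile

def extract_slices(hdf_path, out_dir, start_index=0,
                   dataset_path="/entry/data/data",
                   low_pct=0.1, high_pct=99.9):
    """Window 32-bit float slices to 16-bit and write numbered .tif files.

    dataset_path and the percentile bounds are assumptions; adjust them
    to match the actual .hdf layout and the desired amount of clipping.
    Returns the number of slices written so the caller can offset the
    numbering of the next slab.
    """
    with h5py.File(hdf_path, "r") as f:
        volume = f[dataset_path]
        n_slices = volume.shape[0]
        # Estimate the window from a subsample of slices so the whole
        # slab does not have to be loaded just to compute percentiles.
        step = max(1, n_slices // 50)
        lo, hi = np.percentile(volume[::step], [low_pct, high_pct])
        for i in range(n_slices):
            # Stretch [lo, hi] to [0, 65535], clipping outliers.
            scaled = (volume[i] - lo) / (hi - lo)
            out = (np.clip(scaled, 0.0, 1.0) * 65535).astype(np.uint16)
            tifffile.imwrite(f"{out_dir}/{start_index + i:05d}.tif", out)
    return n_slices
```

Calling this once per slab, passing the running slice count as `start_index`, yields one consecutively numbered stack and so covers the extract-then-merge step described above.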
For particularly large datasets (such as those split into slabs), the entire dataset may not fit on your desktop machine. In these cases it may be more efficient to crop the source files on the source server before transferring them to your desktop for tasks requiring a graphical interface or user intervention. This allows slabs to be processed in parallel up until volume packaging and greatly reduces the size of the initial data transfer. A sketch of the server-side cropping follows.
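As one illustration (not a prescribed tool), the recorded bounding box from above can be applied to every extracted slice with `tifffile`; `x_off`, `y_off`, `width`, and `height` stand in for the values you recorded:

```python
import glob
import os
import tifffile

def crop_stack(slice_dir, out_dir, x_off, y_off, width, height):
    """Crop every .tif slice in slice_dir to the recorded bounding box.

    The offsets and dimensions come from the earlier inspection step;
    directory paths are placeholders for the server-side locations.
    """
    for path in sorted(glob.glob(os.path.join(slice_dir, "*.tif"))):
        img = tifffile.imread(path)
        # Rows are the y axis and columns are the x axis of the array.
        cropped = img[y_off:y_off + height, x_off:x_off + width]
        tifffile.imwrite(os.path.join(out_dir, os.path.basename(path)),
                         cropped)
```

Since each slab's slices are independent, one such job can run per slab in parallel before the cropped stacks are transferred for packaging.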