RPM deployment creates too many sandboxes
There's an optimisation that can be made to the RPM deployment mechanism to significantly speed up the time taken to copy RPMs to the deployment area, especially for larger builds with hundreds of RPMs.
Currently (in ybd/ybd.py) the deploy_rpm() function obtains its dstfilename from rpm_deployment_filename(), which runs rpm in a sandbox to read the various metadata from the original source RPM (which has a vastly different name), specifically "%{{name}}-%{{version}}-%{{release}}.%{{arch}}".
This sandbox is created, and torn down, for each RPM (since it happens inside the `for pkgname in get_generated_package_names` loop in deploy_rpm), which is very costly.
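To illustrate the cost, the current pattern looks roughly like this (a sketch only; `run_in_sandbox` and `deploy_rpms_per_package` are illustrative stand-ins, not ybd's real API):

```python
# Sketch of the current per-package pattern: a fresh sandbox is spun up
# and torn down for every single RPM, just to run one rpm query.
def deploy_rpms_per_package(rpm_paths, run_in_sandbox):
    """Return (source_path, deployment_filename) pairs, one sandbox each."""
    deployed = []
    for path in rpm_paths:
        # One whole sandbox lifecycle per package.
        name = run_in_sandbox(
            ["rpm", "-qp", "--queryformat",
             "%{name}-%{version}-%{release}.%{arch}", path])
        deployed.append((path, name + ".rpm"))
    return deployed
```

With hundreds of RPMs this means hundreds of sandbox lifecycles, which is where the time goes.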
I conducted a quick test on my machine, disabling the actual copy (which should add no more than 10 seconds even on fast IO, and is probably really in the 1-2 second range):
Normal conditions (with the rpm call):
Start of RPM deploy: 00:00:24 End of RPM deploy: 00:01:58 Total: 00:01:34
Test conditions (just run an "echo hello" command in the sandbox, i.e. spin up/tear down only):
Start of RPM deploy: 00:00:25 End of RPM deploy: 00:01:44 Total: 00:01:19
As you can see, the actual rpm commands add very little, a difference of only 15 seconds; the majority of the time is spent spinning up and tearing down a sandbox for each RPM.
So a good optimisation here would be to parse the metadata for all the RPMs passed to deploy_rpm in a single sandboxed environment, and then perform the copies using those results. This should result in a very nice speedup to the process.
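The batched approach could be sketched like this (again, `run_in_sandbox` and `deploy_rpms_batched` are hypothetical names, not ybd's real API). It relies on the fact that `rpm -qp` accepts multiple package files and, with a query format ending in `\n`, prints one line per file in argument order:

```python
# Sketch of the proposed optimisation: one sandbox, one rpm invocation
# that queries the metadata of every package at once.
def deploy_rpms_batched(rpm_paths, run_in_sandbox):
    """Return (source_path, deployment_filename) pairs from one sandbox."""
    rpm_paths = list(rpm_paths)
    # A single rpm call queries all packages; rpm emits one formatted
    # line per path, in the order the paths were given.
    output = run_in_sandbox(
        ["rpm", "-qp", "--queryformat",
         "%{name}-%{version}-%{release}.%{arch}\n"] + rpm_paths)
    names = output.splitlines()
    # Pair each source path with its queried deployment name.
    return [(path, name + ".rpm") for path, name in zip(rpm_paths, names)]
```

The copies themselves need no sandbox at all, so only the metadata query stays inside it.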