In Occam, storage is handled first and foremost by the local driver. This driver, located in the storage/plugins path, simply tracks object data in separate directories. The repository directory holds the versioned content, while the resource directory coordinates with the ResourceManager and the various resource plugins. The cache directory holds checkouts of the versioned data, which can be destroyed and recreated from the repository itself. These paths are usually mounted into virtual machines when objects are executed. The builds path similarly holds the content derived from building the object; it is what gets mounted when a built object is requested or executed.
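To illustrate the layout described above, the following sketch resolves the per-object directories the local driver keeps. The helper function and the root path are hypothetical; only the four directory names come from the description.

```python
from pathlib import Path

# Hypothetical root for local storage; the real driver lives under storage/plugins.
STORAGE_ROOT = Path("/var/occam/storage")

def object_paths(uuid):
    """Resolve the per-object directories the local driver keeps:
    versioned content, resource data, disposable checkouts, and builds."""
    return {
        "repository": STORAGE_ROOT / "repository" / uuid,  # versioned content
        "resource":   STORAGE_ROOT / "resource"   / uuid,  # ResourceManager data
        "cache":      STORAGE_ROOT / "cache"      / uuid,  # recreatable checkouts
        "builds":     STORAGE_ROOT / "builds"     / uuid,  # built artifacts
    }
```

Because the cache directory holds only checkouts, its contents can always be regenerated from the repository directory if it is destroyed.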
For external repositories, we can co-opt the interface used for local storage and implement routines that make sense for that repository based on its capabilities. There are a few such operations.
A dictionary accountInfo may be included when the backend has specific authorization attached to an Account on the system. For some backends, including the generic local storage backend, it is empty. For backends such as IPFS, it may also be empty because IPFS is a global storage system with no per-account authorization. Dropbox, on the other hand, includes an access key attached to the Account issuing the Occam command. In that case, accountInfo distinguishes different accounts and allows each person to attach their own Dropbox account.
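To make this concrete, accountInfo might take the following shapes for the backends mentioned above. The key name and token value are illustrative assumptions, not the actual schema.

```python
# Generic local storage: no per-account authorization is needed.
local_account_info = {}

# IPFS: a global, public store, so no credentials are attached either.
ipfs_account_info = {}

# Dropbox: carries the access key attached to the Account issuing
# the Occam command (hypothetical key name and token value).
dropbox_account_info = {"access_key": "example-token"}

def requires_authorization(account_info):
    """A backend needs per-Account context only when accountInfo is non-empty."""
    return bool(account_info)
```

This lets the storage layer treat authorization uniformly: backends that need no credentials simply receive an empty dictionary.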
push: (uuid, path, revision, accountInfo) Stores the contents at path, which represent an object with the given uuid and revision hash. This returns the information required to retrieve the stored data.
pull: (uuid, storageHash, accountInfo) Retrieves the data and places it in local storage based on the information (built from a previous push call) and the object's uuid.
purge: (uuid, storageHash, accountInfo) Deletes the content previously stored via a push from the storage system.
clone: (storageHash, accountInfo, path) Creates a clone (via git) of a stored object repository. This is separate from pull since it can make better decisions about pulling a subset of the data.
discover: (uuid, accountInfo) Returns whether or not the requested data exists on the storage system.
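Taken together, these operations form a small backend interface. A minimal sketch in Python follows, with method names taken from the list above; the in-memory backend is purely illustrative and simply records pushed paths in a dictionary.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """The operations an external storage backend implements."""

    @abstractmethod
    def push(self, uuid, path, revision, account_info): ...

    @abstractmethod
    def pull(self, uuid, storage_hash, account_info): ...

    @abstractmethod
    def purge(self, uuid, storage_hash, account_info): ...

    @abstractmethod
    def clone(self, storage_hash, account_info, path): ...

    @abstractmethod
    def discover(self, uuid, account_info): ...

class InMemoryBackend(StorageBackend):
    """Illustrative backend that 'stores' content in a dict keyed by uuid:revision."""

    def __init__(self):
        self._store = {}

    def push(self, uuid, path, revision, account_info):
        storage_hash = f"{uuid}:{revision}"
        self._store[storage_hash] = path
        # Return the information required to retrieve the stored data later.
        return {"storageHash": storage_hash}

    def pull(self, uuid, storage_hash, account_info):
        return self._store[storage_hash]

    def purge(self, uuid, storage_hash, account_info):
        self._store.pop(storage_hash, None)

    def clone(self, storage_hash, account_info, path):
        # A real backend would perform a git clone into path here.
        return self._store[storage_hash]

    def discover(self, uuid, account_info):
        return any(key.startswith(uuid + ":") for key in self._store)
```

A backend for a real system would replace the dictionary with calls into that system's API, but the shape of each operation stays the same.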
We can create an IPFS-backed system by implementing push and pull on top of the basic IPFS command-line tools. We can also build off of IPFS's ability to mount its namespace as a traditional directory, allowing content to be pulled as needed.
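A push/pull pair built on the IPFS command line might look like the sketch below. It shells out to `ipfs add` and `ipfs get`; the command construction is split into separate functions so it can be inspected without a running IPFS daemon. The function signatures mirror the operations above, but the glue code is an assumption.

```python
import subprocess

def ipfs_add_command(path):
    # --recursive adds a directory tree; --quieter prints only the final hash.
    return ["ipfs", "add", "--recursive", "--quieter", path]

def ipfs_get_command(storage_hash, destination):
    return ["ipfs", "get", storage_hash, "-o", destination]

def push(uuid, path, revision, account_info=None):
    """Store the object's content in IPFS and return retrieval information."""
    result = subprocess.run(ipfs_add_command(path),
                            capture_output=True, text=True, check=True)
    return {"storageHash": result.stdout.strip()}

def pull(uuid, storage_hash, destination, account_info=None):
    """Fetch previously pushed content into local storage."""
    subprocess.run(ipfs_get_command(storage_hash, destination), check=True)
```

Since IPFS is content-addressed, the hash returned by push serves directly as the storageHash that later pull and purge calls receive.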
Dropbox is a more traditional system that allows files to be synchronized. It is, however, authenticated via an access token, which means the push and pull mechanisms require extra context attached to an Account. This context (the access key) can be passed into the storage system, or None if it is not required, as with IPFS, which is a public global store.
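Pulling the access token out of accountInfo and attaching it to a request is then straightforward. The sketch below builds the headers for an upload against Dropbox's HTTP content API; the `access_key` field name and the surrounding glue are assumptions, though the endpoint and the `Dropbox-API-Arg` header are part of Dropbox's documented API.

```python
import json

# Dropbox's content endpoint for file uploads.
DROPBOX_UPLOAD_URL = "https://content.dropboxapi.com/2/files/upload"

def dropbox_headers(account_info, dropbox_path):
    """Build the headers for an authenticated Dropbox content upload."""
    if not account_info:
        raise ValueError("Dropbox requires an access token attached to the Account")
    return {
        # The per-Account access key is passed as a bearer token.
        "Authorization": "Bearer " + account_info["access_key"],
        "Content-Type": "application/octet-stream",
        # Dropbox content endpoints take call arguments in a JSON header.
        "Dropbox-API-Arg": json.dumps({"path": dropbox_path, "mode": "overwrite"}),
    }
```

Because the token lives in accountInfo rather than in the backend itself, two people issuing Occam commands can push to their own separate Dropbox accounts through the same plugin.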