Commit 6deec9d3 authored by Renana Poranne's avatar Renana Poranne

Merge branch 'update-readme-2022-06-05' into 'main'

corrected some typos + reformulation of README

See merge request !1
parents cd1842d6 d48e13f6
Interpretable deep learning was used to identify structure-property relationships.
</p>
## Preparations
1. Download the repository.
2. Download the dataset (`csv` and `xyz` files) from [COMPAS](https://gitlab.com/porannegroup/compas).
3. Update the `csv` and `xyz` paths in `utiles/args.py`.
4. Install the conda environment according to the instructions below.
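Step 3 above amounts to pointing the dataset-path arguments at the downloaded COMPAS files. A minimal sketch of what that might look like in `utiles/args.py`, assuming argparse-style arguments (the names `--csv_path` and `--xyz_dir` are assumptions, not necessarily the repo's actual argument names):

```python
import argparse

def get_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hypothetical argument names: point the defaults at the files
    # downloaded from the COMPAS repository.
    parser.add_argument('--csv_path', default='data/compas/compas.csv')
    parser.add_argument('--xyz_dir', default='data/compas/xyzs')
    return parser.parse_args(argv)
```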
## Dependencies
```
pip install requests
```
## Usage
### Training
The script `train.py` trains the model using the train and validation datasets. After training is done, it runs an evaluation on the test set and prints the results.
The saved model and the plots of the predictions vs. the target values are saved in `summary/{exp_name}`. The TensorBoard log files are also saved in this directory.
```
python train.py --name 'exp_name' --target_features 'GAP_eV, Erel_eV'
```
`target_features` values should be separated with `,`.
A full list of possible arguments can be found in `utiles/args.py`.
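A helper that splits such a comma-separated string into individual target names might look like this (the function name is illustrative; the repo's actual argument parsing lives in `utiles/args.py`):

```python
def parse_target_features(spec):
    """Split a comma-separated target string, e.g. 'GAP_eV, Erel_eV'."""
    # Strip whitespace around each name and drop empty entries.
    return [name.strip() for name in spec.split(',') if name.strip()]
```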
### Evaluation
To only run the evaluation on a previously trained model, run the following command.
```
python eval.py --name 'exp_name'
```
### Interpretability
To run the interpretability algorithm, use `interpretability_save_all.py`. This script saves all the molecules in the dataset with their GradRAM weights in the same directory as the models and logs.
```
python interpretability_save_all.py --name 'exp_name'
```
To run only on a subset of molecules, run `interpretability_from_names.py` and change the names of the molecules in the list in line 27.
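The edit boils down to replacing a hard-coded list of molecule names and filtering the dataset against it. A sketch under the assumption that the script keeps the names in a plain Python list (the variable name and molecule identifiers here are illustrative, not the script's actual contents):

```python
# Illustrative stand-in for the list around line 27 of
# interpretability_from_names.py; replace with real molecule names.
molecule_names = ['molecule_001', 'molecule_002']

def select_subset(dataset_names, wanted):
    # Keep only the molecules requested for the interpretability run,
    # preserving the dataset's original order.
    wanted_set = set(wanted)
    return [name for name in dataset_names if name in wanted_set]
```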
## Repo structure
- `data` - Dataset and data-related code.
- `se3_transformers` - SE3 model used as a predictor.
- `utils` - Helper functions.
- `summary` - Logs, trained models, plots, and interpretability figures for each experiment.