|
|
|
|
|
To check whether our causal algorithms work correctly, it can be helpful to **test them on (possibly synthetic) data from a known data generating process**. If the known causal mechanisms (e.g. the causal graph or specific treatment effects) are correctly recovered, this **lends credibility** to the algorithm under consideration.
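As a minimal sketch of this idea (a hypothetical example, not one of the datasets collected below): we can simulate data from a known linear structural causal model with a confounder and a known treatment effect, then check that a simple adjustment-based estimator recovers that effect while the naive estimator does not.

```python
# Hypothetical sanity check: generate data from a known SCM
#   Z -> X, Z -> Y, X -> Y (true effect = 2.0)
# and verify that adjusting for the confounder Z recovers the effect.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
true_effect = 2.0

z = rng.normal(size=n)                               # confounder
x = 0.8 * z + rng.normal(size=n)                     # treatment, confounded by z
y = true_effect * x + 1.5 * z + rng.normal(size=n)   # outcome

# Naive slope of y on x is biased upward by the open backdoor path via z.
naive = np.polyfit(x, y, 1)[0]

# OLS on [x, z] blocks the backdoor path and should recover the true effect.
design = np.column_stack([x, z, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
adjusted = coef[0]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Because the data generating process is fully known here, any systematic gap between the adjusted estimate and the true effect would point to a bug in the estimator, which is exactly the kind of credibility check described above.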
|
|
|
|
|
|
|
|
|
The more the benchmark dataset is similar (w.r.t. number of variables, samples, confounders, ...) to the data that we actually want to analyze, the more likely it is for a successful causal algorithm to perform equally well in our actual task. Therefore, we will use this page to **collect datasets from various domains**, such that we can draw **suitable benchmark datasets** from this knowledge base. Additionally, we collect our custom-generated **synthetic datasets** [here](https://gitlab.com/causal-inference/working-group/-/tree/main/data) on our repository.
|
|
|
|
|
|
| Link | Authors | Keywords | Contributor | Description |
|:------------------:|:--------:|:--------:|:------------:|:-----------:|
|