Projects with this topic
MultiNativQA is a multilingual native question-answering (QA) dataset consisting of 64k QA pairs in seven languages, ranging from extremely low- to high-resource, covering 18 topics from nine regions. Paper: https://arxiv.org/pdf/2407.09823. Project: https://nativqa.gitlab.io
Data-processing tools for benchmarks
A benchmarking framework for machine learning with fNIRS (functional near-infrared spectroscopy)
QAPerf is a tool for evaluating the performance of quality assessment (QA) methods. It automates the generation of standard reports, including correlation tables, leave-one-group-out correlation boxplots, and other benchmarking visualizations. Designed for researchers and practitioners, QAPerf streamlines the analysis of QA metrics, ensuring reliable and reproducible evaluations.
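The reports it automates boil down to a few standard statistics. As a rough, hypothetical illustration (this is not QAPerf's actual API; the column names metric_score, subjective_score, and dataset are assumptions), an overall correlation table and the per-group values behind a leave-one-group-out boxplot could be computed like this:

```python
# Sketch of the kind of evaluation QAPerf automates (not its real interface):
# correlate a QA metric's predictions with subjective ground-truth scores,
# both globally and per held-out group (leave-one-group-out).
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def overall_correlations(df: pd.DataFrame) -> dict:
    """Global Pearson/Spearman correlation between metric and ground truth."""
    return {
        "pearson": pearsonr(df["metric_score"], df["subjective_score"])[0],
        "spearman": spearmanr(df["metric_score"], df["subjective_score"])[0],
    }

def leave_one_group_out(df: pd.DataFrame, group_col: str = "dataset") -> pd.Series:
    """Spearman correlation computed on each group separately; the spread of
    these values is what a leave-one-group-out boxplot visualizes."""
    return df.groupby(group_col)[["metric_score", "subjective_score"]].apply(
        lambda g: spearmanr(g["metric_score"], g["subjective_score"])[0]
    )

# Hypothetical usage: scores.csv has metric_score, subjective_score, dataset.
# df = pd.read_csv("scores.csv")
# print(overall_correlations(df))
# print(leave_one_group_out(df).describe())
```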
A Live Evaluation of Computational Methods for Metagenome Investigation
SecurityPerf is a tool for benchmarking production workloads, making it easy to measure the impact of security programs on those workloads.
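As a hedged illustration of what such a measurement involves (this is not SecurityPerf's interface; both workload commands below are placeholders), the overhead of a security program is typically derived by timing the same workload with and without it enabled:

```python
# Sketch of a with/without overhead comparison, under the assumptions above.
import statistics
import subprocess
import time

def run_workload(cmd: list[str], repeats: int = 5) -> float:
    """Median wall-clock time of a workload command over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Placeholder commands: the bare workload vs. the same workload running
# under a security program (e.g., a syscall or eBPF-based monitor).
baseline = run_workload(["./workload.sh"])
monitored = run_workload(["./workload_with_monitor.sh"])
print(f"overhead: {(monitored / baseline - 1) * 100:.1f}%")
```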
VTmark results, Ansible roles and other scripts.
This project uses natural language processing to automatically assess causal diagrams completed by students, scoring them by their semantic distance from a provided model causal diagram.
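The project's exact pipeline is not described here, but one common way to approximate semantic distance between diagram labels is with sentence embeddings. The sketch below assumes the sentence-transformers library and a hypothetical choice of encoder:

```python
# One possible way to score semantic distance between diagram labels (a sketch,
# not the project's actual method): embed each label with a sentence encoder
# and match student labels to model labels by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def diagram_score(student_labels: list[str], model_labels: list[str]) -> float:
    """Average best-match cosine similarity of each model-diagram label
    against the student's labels; 1.0 means every concept matched exactly."""
    student_emb = encoder.encode(student_labels, convert_to_tensor=True)
    model_emb = encoder.encode(model_labels, convert_to_tensor=True)
    sims = util.cos_sim(model_emb, student_emb)    # (n_model, n_student)
    return sims.max(dim=1).values.mean().item()    # best match per concept

# Hypothetical usage with toy labels:
# print(diagram_score(["less sleep", "worse grades"],
#                     ["sleep deprivation", "lower academic performance"]))
```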
Modular toolbox for performance recording, benchmarking and visualization. https://kpouget_spice.gitlab.io/streaming_stats/