
Codabench: Flexible, Easy-to-Use and Reproducible Meta-Benchmark Platform

Abstract: Obtaining standardized, crowdsourced benchmarks of computational methods is a major issue in data science communities. Dedicated frameworks enabling fair benchmarking in a unified environment are yet to be developed. Here we introduce Codabench, an open-source, community-driven platform for benchmarking algorithms or software agents against datasets or tasks. A public instance of Codabench is open to everyone, free of charge, and allows benchmark organizers to fairly compare submissions under the same setting (software, hardware, data, algorithms), with custom protocols and data formats. Codabench offers unique features that make organizing benchmarks flexible, easy, and reproducible, such as the ability to reuse benchmark templates and to supply compute resources on demand. Codabench has been used internally and externally for various applications, serving more than 130 users and receiving over 2,500 submissions. As illustrative use cases, we introduce four diverse benchmarks covering Graph Machine Learning, Cancer Heterogeneity, Clinical Diagnosis and Reinforcement Learning.
Document type: Preprints, Working Papers, ...
Contributor: Zhen Xu
Submitted on: Monday, June 27, 2022 - 9:22:07 AM
Last modification on: Tuesday, September 13, 2022 - 3:38:09 PM

HAL Id: hal-03374222, version 4


Zhen Xu, Sergio Escalera, Adrien Pavao, Magali Richard, Wei-Wei Tu, et al.. Codabench: Flexible, Easy-to-Use and Reproducible Meta-Benchmark Platform. 2022. ⟨hal-03374222v4⟩


