Research output: Contribution to journal › Review article › peer-review
Tutorial: assessing metagenomics software with the CAMI benchmarking toolkit. / Meyer, Fernando; Lesker, Till-Robin; Koslicki, David; Fritz, Adrian; Гуревич, Алексей Александрович; Darling, Aaron E.; Sczyrba, Alexander; Bremges, Andreas; McHardy, Alice C.
In: Nature Protocols, Vol. 16, No. 4, 04.2021, pp. 1785-1801.
TY - JOUR
T1 - Tutorial: assessing metagenomics software with the CAMI benchmarking toolkit
AU - Meyer, Fernando
AU - Lesker, Till-Robin
AU - Koslicki, David
AU - Fritz, Adrian
AU - Гуревич, Алексей Александрович
AU - Darling, Aaron E.
AU - Sczyrba, Alexander
AU - Bremges, Andreas
AU - McHardy, Alice C.
PY - 2021/4
Y1 - 2021/4
AB - Computational methods are key in microbiome research, and obtaining a quantitative and unbiased performance estimate is important for method developers and applied researchers. For meaningful comparisons between methods, to identify best practices and common use cases, and to reduce overhead in benchmarking, it is necessary to have standardized datasets, procedures and metrics for evaluation. In this tutorial, we describe emerging standards in computational meta-omics benchmarking derived and agreed upon by a larger community of researchers. Specifically, we outline recent efforts by the Critical Assessment of Metagenome Interpretation (CAMI) initiative, which supplies method developers and applied researchers with exhaustive quantitative data about software performance in realistic scenarios and organizes community-driven benchmarking challenges. We explain the most relevant evaluation metrics for assessing metagenome assembly, binning and profiling results, and provide step-by-step instructions on how to generate them. The instructions use simulated mouse gut metagenome data released in preparation for the second round of CAMI challenges and showcase the use of a repository of tool results for CAMI datasets. This tutorial will serve as a reference for the community and facilitate informative and reproducible benchmarking in microbiome research.
UR - http://www.scopus.com/inward/record.url?scp=85101805348&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/af712119-d0ae-3f2a-bcb1-5010080a22bd/
U2 - 10.1038/s41596-020-00480-3
DO - 10.1038/s41596-020-00480-3
M3 - Review article
VL - 16
SP - 1785
EP - 1801
JO - Nature Protocols
JF - Nature Protocols
SN - 1754-2189
IS - 4
ER -
ID: 74772126
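For a concrete sense of the binning metrics the tutorial describes, the following is a minimal Python sketch of purity and completeness computed per predicted genome bin against a gold-standard sequence-to-genome mapping. It is an illustrative, simplified version only (unweighted, counted per sequence; all function and variable names are hypothetical), not the CAMI toolkit's own implementation, which computes these and related metrics per base pair from standardized binning files.

```python
# Illustrative sketch: average purity and completeness of a genome binning,
# evaluated against a gold-standard sequence-to-genome assignment.
# Simplified (per-sequence, unweighted); not the CAMI toolkit's implementation.
from collections import Counter

def purity_completeness(bins, gold):
    """bins: dict bin_id -> set of sequence ids (predicted binning)
    gold: dict sequence id -> genome id (gold-standard assignment)"""
    genome_sizes = Counter(gold.values())  # sequences per true genome
    purities, completenesses = [], []
    for bin_id, seqs in bins.items():
        hits = Counter(gold[s] for s in seqs if s in gold)
        if not hits:
            continue
        majority_genome, majority_count = hits.most_common(1)[0]
        purities.append(majority_count / len(seqs))                  # fraction of the bin from its majority genome
        completenesses.append(majority_count / genome_sizes[majority_genome])  # fraction of that genome recovered
    return (sum(purities) / len(purities),
            sum(completenesses) / len(completenesses))

# Toy example with two true genomes and two predicted bins
gold = {"c1": "gA", "c2": "gA", "c3": "gB", "c4": "gB", "c5": "gB"}
bins = {"bin1": {"c1", "c2", "c3"}, "bin2": {"c4", "c5"}}
print(purity_completeness(bins, gold))  # -> (~0.83, ~0.83)
```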