Standard

Human-Annotated NER Dataset for the Kyrgyz Language. / Turatali, Timur; Алексеев, Антон Михайлович; Jumalieva, Gulira; Kabaeva, Gulnara; Николенко, Сергей Игоревич.

2025 10th International Conference on Computer Science and Engineering (UBMK). Institute of Electrical and Electronics Engineers Inc., 2025. p. 1607-1612.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Harvard

Turatali, T, Алексеев, АМ, Jumalieva, G, Kabaeva, G & Николенко, СИ 2025, Human-Annotated NER Dataset for the Kyrgyz Language. in 2025 10th International Conference on Computer Science and Engineering (UBMK). Institute of Electrical and Electronics Engineers Inc., pp. 1607-1612, 10th International Conference on Computer Science and Engineering (UBMK), Istanbul, Turkey, 17/09/25. https://doi.org/10.1109/ubmk67458.2025.11206879

APA

Turatali, T., Алексеев, А. М., Jumalieva, G., Kabaeva, G., & Николенко, С. И. (2025). Human-Annotated NER Dataset for the Kyrgyz Language. In 2025 10th International Conference on Computer Science and Engineering (UBMK) (pp. 1607-1612). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ubmk67458.2025.11206879

Vancouver

Turatali T, Алексеев АМ, Jumalieva G, Kabaeva G, Николенко СИ. Human-Annotated NER Dataset for the Kyrgyz Language. In 2025 10th International Conference on Computer Science and Engineering (UBMK). Institute of Electrical and Electronics Engineers Inc. 2025. p. 1607-1612 https://doi.org/10.1109/ubmk67458.2025.11206879

Author

Turatali, Timur ; Алексеев, Антон Михайлович ; Jumalieva, Gulira ; Kabaeva, Gulnara ; Николенко, Сергей Игоревич. / Human-Annotated NER Dataset for the Kyrgyz Language. 2025 10th International Conference on Computer Science and Engineering (UBMK). Institute of Electrical and Electronics Engineers Inc., 2025. pp. 1607-1612

BibTeX

@inproceedings{c85a9835377d4c189d2ea4c4dfd3bb44,
title = "Human-Annotated NER Dataset for the Kyrgyz Language",
abstract = "We introduce KyrgyzNER, the first manually annotated named entity recognition dataset for the Kyrgyz language. Comprising 1,499 news articles from the 24.KG news portal, the dataset contains 10,900 sentences and 39,075 entity mentions across 27 named entity classes. We show our annotation scheme, discuss the challenges encountered in the annotation process, and present the descriptive statistics. We also evaluate several named entity recognition models, including traditional sequence labeling approaches based on conditional random fields and state-of-the-art multilingual transformer-based models fine-tuned on our dataset. While all models show difficulties with rare entity categories, models such as the multilingual RoBERTa variant pretrained on a large corpus across many languages achieve a promising balance between precision and recall. These findings emphasize both the challenges and opportunities of using multilingual pretrained models for processing languages with limited resources. Although the multilingual RoBERTa model performed best, other multilingual models yielded comparable results. This suggests that future work exploring more granular annotation schemes may offer deeper insights for Kyrgyz language processing pipelines evaluation.",
author = "Timur Turatali and Алексеев, {Антон Михайлович} and Gulira Jumalieva and Gulnara Kabaeva and Николенко, {Сергей Игоревич}",
year = "2025",
month = oct,
day = "24",
doi = "10.1109/ubmk67458.2025.11206879",
language = "English",
pages = "1607--1612",
booktitle = "2025 10th International Conference on Computer Science and Engineering (UBMK)",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",
note = "10th International Conference on Computer Science and Engineering (UBMK) ; Conference date: 17-09-2025 Through 19-09-2025",
url = "https://ubmk.org.tr/en/",
}
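
The abstract above notes that multilingual transformer models, including a multilingual RoBERTa variant, were fine-tuned on the dataset for NER. As a minimal sketch only (not code from the paper and not the authors' exact setup), token-classification fine-tuning with the Hugging Face Transformers library could be arranged as follows; the checkpoint name, file names, and three-label subset of the 27 classes are illustrative assumptions, and the data is assumed to be exported as word lists with integer tag ids.

# Illustrative sketch, not the paper's code: fine-tune a multilingual encoder
# for token classification on a hypothetical KyrgyzNER-style JSON export.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

MODEL_NAME = "xlm-roberta-base"          # assumed multilingual RoBERTa checkpoint
LABELS = ["O", "B-PERSON", "I-PERSON"]   # placeholder subset of the 27 entity classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

# Hypothetical files with "tokens" (word lists) and "ner_tags" (integer label ids).
raw = load_dataset("json", data_files={"train": "train.json", "validation": "dev.json"})

def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's tag to its sub-word pieces;
    # special tokens get -100 so they are ignored by the loss.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = [
        [-100 if w is None else tags[w] for w in enc.word_ids(batch_index=i)]
        for i, tags in enumerate(batch["ner_tags"])
    ]
    return enc

tokenized = raw.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="kyrgyzner-xlmr", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()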

RIS

TY - GEN

T1 - Human-Annotated NER Dataset for the Kyrgyz Language

AU - Turatali, Timur

AU - Алексеев, Антон Михайлович

AU - Jumalieva, Gulira

AU - Kabaeva, Gulnara

AU - Николенко, Сергей Игоревич

PY - 2025/10/24

Y1 - 2025/10/24

N2 - We introduce KyrgyzNER, the first manually annotated named entity recognition dataset for the Kyrgyz language. Comprising 1,499 news articles from the 24.KG news portal, the dataset contains 10,900 sentences and 39,075 entity mentions across 27 named entity classes. We show our annotation scheme, discuss the challenges encountered in the annotation process, and present the descriptive statistics. We also evaluate several named entity recognition models, including traditional sequence labeling approaches based on conditional random fields and state-of-the-art multilingual transformer-based models fine-tuned on our dataset. While all models show difficulties with rare entity categories, models such as the multilingual RoBERTa variant pretrained on a large corpus across many languages achieve a promising balance between precision and recall. These findings emphasize both the challenges and opportunities of using multilingual pretrained models for processing languages with limited resources. Although the multilingual RoBERTa model performed best, other multilingual models yielded comparable results. This suggests that future work exploring more granular annotation schemes may offer deeper insights for Kyrgyz language processing pipelines evaluation.

AB - We introduce KyrgyzNER, the first manually annotated named entity recognition dataset for the Kyrgyz language. Comprising 1,499 news articles from the 24.KG news portal, the dataset contains 10,900 sentences and 39,075 entity mentions across 27 named entity classes. We show our annotation scheme, discuss the challenges encountered in the annotation process, and present the descriptive statistics. We also evaluate several named entity recognition models, including traditional sequence labeling approaches based on conditional random fields and state-of-the-art multilingual transformer-based models fine-tuned on our dataset. While all models show difficulties with rare entity categories, models such as the multilingual RoBERTa variant pretrained on a large corpus across many languages achieve a promising balance between precision and recall. These findings emphasize both the challenges and opportunities of using multilingual pretrained models for processing languages with limited resources. Although the multilingual RoBERTa model performed best, other multilingual models yielded comparable results. This suggests that future work exploring more granular annotation schemes may offer deeper insights for Kyrgyz language processing pipelines evaluation.

UR - https://arxiv.org/pdf/2509.19109

UR - https://www.mendeley.com/catalogue/312fe3a1-45d3-335a-b285-48982cd2de3e/

U2 - 10.1109/ubmk67458.2025.11206879

DO - 10.1109/ubmk67458.2025.11206879

M3 - Conference contribution

SP - 1607

EP - 1612

BT - 2025 10th International Conference on Computer Science and Engineering (UBMK)

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 17 September 2025 through 19 September 2025

ER -

ID: 143021194