Standard

GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture. / GigaChat team.

2025.

Research output: Working paper › Preprint


BibTeX

@techreport{9766060ce5514defae08580852c7427b,
title = "GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture",
abstract = "Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments to guide design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three open GigaChat models in open-source (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.",
keywords = "cs.CL, cs.AI",
author = "{GigaChat team} and Valentin Mamedov and Evgenii Kosarev and Gregory Leleytner and Ilya Shchuckin and Valeriy Berezovskiy and Daniil Smirnov and Dmitry Kozlov and Sergei Averkiev and Ivan Lukyanenko and Aleksandr Proshunin and Ainur Israfilova and Ivan Baskov and Artem Chervyakov and Emil Shakirov and Mikhail Kolesov and Daria Khomich and Darya Latortseva and Sergei Porkhun and Yury Fedorov and Oleg Kutuzov and Polina Kudriavtseva and Sofiia Soldatova and Egor Kolodin and Stanislav Pyatkin and Dzmitry Menshykh and Sergei Grafov and Eldar Damirov and Vladimir Karlov and Ruslan Gaitukiev and Arkadiy Shatenov and Alena Fenogenova and Nikita Savushkin and Fedor Minkin",
note = "ACL-2025 System Demo",
year = "2025",
month = jun,
day = "11",
doi = "10.48550/arXiv.2506.09440",
language = "English",
type = "WorkingPaper",
}
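The BibTeX record above can be consumed programmatically. A minimal sketch using only the standard library, assuming the quoted `key = "value"` field style used by this export (it deliberately ignores unquoted macro values such as `month = jun` and does not handle brace-delimited values):

```python
import re

def parse_bibtex_fields(record: str) -> dict:
    """Extract key = "value" fields from a single BibTeX record."""
    fields = {}
    # DOTALL lets values (e.g. the abstract) span multiple lines.
    for key, value in re.findall(r'(\w+)\s*=\s*"((?:[^"\\]|\\.)*)"', record, re.DOTALL):
        fields[key.lower()] = value.strip()
    return fields

sample = '''@techreport{example,
title = "GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture",
year = "2025",
doi = "10.48550/arXiv.2506.09440",
}'''

fields = parse_bibtex_fields(sample)
print(fields["doi"])  # 10.48550/arXiv.2506.09440
```

For full BibTeX support (macros, braces, string concatenation) a dedicated parser library would be a better fit; this sketch only covers the shape of this particular export.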

RIS

TY - UNPB

T1 - GigaChat Family

T2 - Efficient Russian Language Modeling Through Mixture of Experts Architecture

AU - GigaChat team

AU - Mamedov, Valentin

AU - Kosarev, Evgenii

AU - Leleytner, Gregory

AU - Shchuckin, Ilya

AU - Berezovskiy, Valeriy

AU - Smirnov, Daniil

AU - Kozlov, Dmitry

AU - Averkiev, Sergei

AU - Lukyanenko, Ivan

AU - Proshunin, Aleksandr

AU - Israfilova, Ainur

AU - Baskov, Ivan

AU - Chervyakov, Artem

AU - Shakirov, Emil

AU - Kolesov, Mikhail

AU - Khomich, Daria

AU - Latortseva, Darya

AU - Porkhun, Sergei

AU - Fedorov, Yury

AU - Kutuzov, Oleg

AU - Kudriavtseva, Polina

AU - Soldatova, Sofiia

AU - Kolodin, Egor

AU - Pyatkin, Stanislav

AU - Menshykh, Dzmitry

AU - Grafov, Sergei

AU - Damirov, Eldar

AU - Karlov, Vladimir

AU - Gaitukiev, Ruslan

AU - Shatenov, Arkadiy

AU - Fenogenova, Alena

AU - Savushkin, Nikita

AU - Minkin, Fedor

N1 - ACL-2025 System Demo

PY - 2025/6/11

Y1 - 2025/6/11

N2 - Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments to guide design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three open GigaChat models in open-source (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.

AB - Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments to guide design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three open GigaChat models in open-source (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.

KW - cs.CL

KW - cs.AI

U2 - 10.48550/arXiv.2506.09440

DO - 10.48550/arXiv.2506.09440

M3 - Preprint

BT - GigaChat Family

ER -
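The RIS record above is likewise line-oriented and easy to parse. A minimal sketch, assuming the single-space `TAG - value` layout shown in this export (canonical RIS puts two spaces before the hyphen, so a general-purpose parser would need to accept both):

```python
def parse_ris(text: str) -> dict:
    """Map RIS tags to lists of values; repeatable tags (AU, KW) accumulate."""
    record = {}
    for line in text.splitlines():
        if " - " in line:
            tag, _, value = line.partition(" - ")
            record.setdefault(tag.strip(), []).append(value.strip())
    return record

sample = """TY - UNPB
T1 - GigaChat Family
AU - Mamedov, Valentin
AU - Kosarev, Evgenii
KW - cs.CL
KW - cs.AI
DO - 10.48550/arXiv.2506.09440
ER - """

rec = parse_ris(sample)
print(rec["AU"])  # ['Mamedov, Valentin', 'Kosarev, Evgenii']
```

Collecting every tag into a list keeps the repeatable `AU` and `KW` entries without special-casing; single-valued tags such as `DO` simply end up as one-element lists.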
