Research output: Working paper › Preprint
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture. / GigaChat team. 2025.
TY - UNPB
T1 - GigaChat Family
T2 - Efficient Russian Language Modeling Through Mixture of Experts Architecture
AU - GigaChat team
AU - Mamedov, Valentin
AU - Kosarev, Evgenii
AU - Leleytner, Gregory
AU - Shchuckin, Ilya
AU - Berezovskiy, Valeriy
AU - Smirnov, Daniil
AU - Kozlov, Dmitry
AU - Averkiev, Sergei
AU - Lukyanenko, Ivan
AU - Proshunin, Aleksandr
AU - Israfilova, Ainur
AU - Baskov, Ivan
AU - Chervyakov, Artem
AU - Shakirov, Emil
AU - Kolesov, Mikhail
AU - Khomich, Daria
AU - Latortseva, Darya
AU - Porkhun, Sergei
AU - Fedorov, Yury
AU - Kutuzov, Oleg
AU - Kudriavtseva, Polina
AU - Soldatova, Sofiia
AU - Kolodin, Egor
AU - Pyatkin, Stanislav
AU - Menshykh, Dzmitry
AU - Grafov, Sergei
AU - Damirov, Eldar
AU - Karlov, Vladimir
AU - Gaitukiev, Ruslan
AU - Shatenov, Arkadiy
AU - Fenogenova, Alena
AU - Savushkin, Nikita
AU - Minkin, Fedor
N1 - ACL-2025 System Demo
PY - 2025/6/11
Y1 - 2025/6/11
N2 - Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments that guided our design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models, accessible via an API, a Telegram bot, and a web interface. Furthermore, we have released three open-source GigaChat models (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
AB - Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments that guided our design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models, accessible via an API, a Telegram bot, and a web interface. Furthermore, we have released three open-source GigaChat models (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
KW - cs.CL
KW - cs.AI
U2 - 10.48550/arXiv.2506.09440
DO - 10.48550/arXiv.2506.09440
M3 - Preprint
BT - GigaChat Family
ER -