<!DOCTYPE article
PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20190208//EN"
       "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.4" xml:lang="en">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">Virtual Communication and Social Networks</journal-id>
   <journal-title-group>
    <journal-title xml:lang="en">Virtual Communication and Social Networks</journal-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Виртуальная коммуникация и социальные сети</trans-title>
    </trans-title-group>
   </journal-title-group>
   <issn publication-format="print">2782-4799</issn>
   <issn publication-format="online">2782-4802</issn>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="publisher-id">105565</article-id>
   <article-id pub-id-type="doi">10.21603/2782-4799-2025-4-4-344-352</article-id>
   <article-id pub-id-type="edn">bjzjcz</article-id>
   <article-categories>
    <subj-group subj-group-type="toc-heading" xml:lang="ru">
     <subject>Коммуникативистика и когнитивные науки</subject>
    </subj-group>
    <subj-group subj-group-type="toc-heading" xml:lang="en">
     <subject>Communication Studies and Cognitive Sciences</subject>
    </subj-group>
    <subj-group>
     <subject>Коммуникативистика и когнитивные науки</subject>
    </subj-group>
   </article-categories>
   <title-group>
     <article-title xml:lang="en">Impact of Parameter-to-Data Ratio on LLM Fine-Tuning in Russian Text Classification Tasks</article-title>
    <trans-title-group xml:lang="ru">
      <trans-title>Влияние соотношения параметров и данных на дообучение LLM в задачах классификации русских текстов</trans-title>
    </trans-title-group>
   </title-group>
   <contrib-group content-type="authors">
    <contrib contrib-type="author">
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Шамигов</surname>
       <given-names>Федор Федорович</given-names>
      </name>
      <name xml:lang="en">
       <surname>Shamigov</surname>
       <given-names>Fedor Fedorovich</given-names>
      </name>
     </name-alternatives>
     <email>fshamigov@mail.ru</email>
     <xref ref-type="aff" rid="aff-1"/>
    </contrib>
   </contrib-group>
   <aff-alternatives id="aff-1">
    <aff>
     <institution xml:lang="ru">Национальный исследовательский Томский государственный университет</institution>
    </aff>
    <aff>
     <institution xml:lang="en">National Research Tomsk State University</institution>
    </aff>
   </aff-alternatives>
   <pub-date publication-format="print" date-type="pub" iso-8601-date="2025-12-22T05:05:53+03:00">
    <day>22</day>
    <month>12</month>
    <year>2025</year>
   </pub-date>
   <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-12-22T05:05:53+03:00">
    <day>22</day>
    <month>12</month>
    <year>2025</year>
   </pub-date>
   <volume>4</volume>
   <issue>4</issue>
   <fpage>344</fpage>
   <lpage>352</lpage>
   <history>
    <date date-type="received" iso-8601-date="2025-10-20T00:00:00+03:00">
     <day>20</day>
     <month>10</month>
     <year>2025</year>
    </date>
    <date date-type="accepted" iso-8601-date="2025-11-11T00:00:00+03:00">
     <day>11</day>
     <month>11</month>
     <year>2025</year>
    </date>
   </history>
   <self-uri xlink:href="https://jsocnet.ru/en/nauka/article/105565/view">https://jsocnet.ru/en/nauka/article/105565/view</self-uri>
   <abstract xml:lang="ru">
     <p>Статья посвящена оптимизации дообучения (fine-tuning) больших языковых моделей (LLM) для задач классификации текстов на русском языке в условиях ограниченных вычислительных ресурсов. Предлагаемый метод основан на балансе между размером модели (числом параметров) и объемом обучающих данных: меньшая модель дообучается на большем датасете и сравнивается с большей моделью, дообученной на меньшем датасете. Цель – установить влияние соотношения параметров моделей и данных для дообучения на качество классификации текстов большими языковыми моделями. Выдвигается гипотеза о том, что «слабая» модель, дообученная на большем объеме данных, может показать близкое или более высокое качество классификации в сравнении с «сильной» моделью, дообученной на меньшем количестве данных. Актуальность исследования обусловлена необходимостью адаптации LLM к русскоязычным данным, где увеличение объема датасета может компенсировать меньший размер модели. Гипотеза проверялась на трех видах классификации: классификация тональности отзывов на фильмы, классификация тональности отзывов на сервисы и классификация новостей по топикам. Эксперименты проводились с использованием мультиязычных моделей: XLM-RoBERTa-comet-small (107 млн параметров) – «слабая» модель и XLM-RoBERTa-base (278 млн параметров) – «сильная» модель, на русскоязычных датасетах. Меньшая модель дообучалась на бóльших объемах данных (пропорционально разнице в параметрах), бóльшая – на меньших. Сделан вывод о том, что «слабая» модель стабильно превосходит или достигает сопоставимых метрик по сравнению с «сильной» моделью, при этом затрачивая в 2–3 раза меньше вычислительных ресурсов (FLOPs), что демонстрирует практическую ценность подхода для энергоэффективного дообучения в русскоязычном контексте.</p>
   </abstract>
   <trans-abstract xml:lang="en">
     <p>This paper addresses the optimization of fine-tuning large language models (LLMs) for Russian-language text classification under constrained computational resources. The proposed approach hinges on balancing model size (i.e., the number of parameters) against the volume of training data: a smaller model is fine-tuned on a larger dataset and compared against a larger model fine-tuned on a smaller dataset. The aim was to establish how the ratio of model parameters to fine-tuning data affects the quality of text classification by large language models. We hypothesized that a weaker (i.e., smaller) model trained on more data could achieve classification performance comparable to or even surpassing that of a stronger (i.e., larger) model trained on less data. This hypothesis was motivated by the need to adapt LLMs to Russian-language tasks, where an increased dataset size may compensate for reduced model capacity. The hypothesis was evaluated across three classification tasks: sentiment analysis of movie reviews, sentiment analysis of service reviews, and topic classification of news articles. The experiments were conducted on Russian-language datasets and employed two multilingual models: XLM-RoBERTa-comet-small (107M parameters) as the weaker model and XLM-RoBERTa-base (278M parameters) as the stronger one. The smaller model was fine-tuned on proportionally larger datasets (scaled according to the difference in parameter counts), while the larger model used correspondingly smaller datasets. The weaker model consistently matched or exceeded the performance of the stronger model while requiring 2–3 times fewer computational resources (measured in FLOPs). This result highlights the practical value of the approach for energy-efficient fine-tuning in Russian-language settings.</p>
   </trans-abstract>
   <kwd-group xml:lang="ru">
    <kwd>LLM</kwd>
    <kwd>fine-tuning</kwd>
    <kwd>XLM-RoBERTa</kwd>
    <kwd>русскоязычные датасеты</kwd>
    <kwd>классификация</kwd>
    <kwd>тональность</kwd>
    <kwd>топики</kwd>
   </kwd-group>
   <kwd-group xml:lang="en">
    <kwd>LLM</kwd>
    <kwd>fine-tuning</kwd>
    <kwd>XLM-RoBERTa</kwd>
    <kwd>Russian-language datasets</kwd>
    <kwd>text classification</kwd>
    <kwd>sentiment</kwd>
    <kwd>topic classification</kwd>
   </kwd-group>
  </article-meta>
 </front>
 <body>
  <p></p>
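   <p>A minimal, illustrative sketch of the comparison protocol summarized in the abstract: the smaller ("weak") encoder is fine-tuned on proportionally more examples than the larger ("strong") one, and both are evaluated on the same held-out split. It assumes the Hugging Face transformers and datasets libraries; the dataset identifier, the hub id for the 107M checkpoint, the example counts, and the hyperparameters are illustrative assumptions rather than the exact resources and settings used in the study.</p>
   <preformat>
# Sketch only: identifiers marked "hypothetical" below are illustrative assumptions.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def accuracy(eval_pred):
    # Simple accuracy over argmax predictions.
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

def fine_tune(model_id, n_train, num_labels=2):
    """Fine-tune model_id on the first n_train examples and return eval metrics."""
    # Hypothetical dataset id; assumed to expose "text" and "label" columns.
    data = load_dataset("example/russian-movie-reviews")
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                    batched=True)
    model = AutoModelForSequenceClassification.from_pretrained(model_id,
                                                               num_labels=num_labels)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out-" + model_id.split("/")[-1],
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=data["train"].shuffle(seed=42).select(range(n_train)),
        eval_dataset=data["test"],
        tokenizer=tokenizer,          # enables dynamic padding via DataCollatorWithPadding
        compute_metrics=accuracy,
    )
    trainer.train()
    return trainer.evaluate()

def training_flops(params, tokens):
    # Rough forward+backward estimate C = 6*N*D (Kaplan et al., 2020; ref. 12).
    return 6 * params * tokens

# "Weak" ~107M model on roughly 2.6x more examples vs. "strong" 278M model on the
# baseline split, mirroring the parameter-proportional data scaling in the abstract.
weak = fine_tune("org/xlm-roberta-comet-small", n_train=26_000)  # hypothetical hub id, ~107M params
strong = fine_tune("xlm-roberta-base", n_train=10_000)           # public 278M multilingual checkpoint
   </preformat>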
 </body>
 <back>
  <ref-list>
   <ref id="B1">
    <label>1.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Гальцева Т. В., Нестеров С. А. Классификация и определение тональности текстов, публикуемых в сети Интернет. Системный анализ в проектировании и управлении: XXVII Междунар. науч.-практ. конф. (Санкт-Петербург, 13–14 октября 2023 г.) СПб.: ПОЛИТЕХ-ПРЕСС, 2024. Ч. 2. С. 491–498. https://doi.org/10.18720/SPBPU/2/id24-202</mixed-citation>
     <mixed-citation xml:lang="en">Galtseva T. V., Nesterov S. A. Classification and sentiment analysis of texts published on the Internet. System analysis in design and management: Proc. XXVII Intern. Sci.-Prac. Conf., St. Petersburg, 13–14 Oct 2023. St. Petersburg: POLITEKH-PRESS, 2024, pt. 2, 491–498. (In Russ.) https://doi.org/10.18720/SPBPU/2/id24-202</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B2">
    <label>2.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Максименко П. И. Жанровая классификация литературных текстов с применением нейросетевых методов (на материале русскоязычной электронной базы фанфикшн). Человек: образ и сущность. Гуманитарные аспекты. 2025. № 1. С. 184–200. https://doi.org/10.31249/chel/2025.01.13</mixed-citation>
     <mixed-citation xml:lang="en">Maksimenko P. I. Genre classification of literary texts through neural network methods (based on the Russian-language electronic fanfiction database). Human being: Image and Essence. Humanitarian Aspects, 2025, (1): 184–200. (In Russ.) https://doi.org/10.31249/chel/2025.01.13</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B3">
    <label>3.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Марков А. К., Семеночкин Д. О., Кравец А. Г., Яновский Т. А. Сравнительный анализ применяемых технологий обработки естественного языка для улучшения качества классификации цифровых документов. International Journal of Open Information Technologies. 2024. Т. 12. № 3. С. 66–77. https://elibrary.ru/tubosi</mixed-citation>
     <mixed-citation xml:lang="en">Markov A. K., Semyonochkin D. O., Kravets A. G., Yanovskiy T. A. Comparative analysis of applied natural language processing technologies for improving the quality of digital document classification. International Journal of Open Information Technologies, 2024, 12(3): 66–77. (In Russ.) https://elibrary.ru/tubosi</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B4">
    <label>4.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Плешакова Е. С., Гатауллин С. Т., Осипов А. В., Романова Е. В., Самбуров Н. С. Эффективная классификация текстов на естественном языке и определение тональности речи с использованием выбранных методов машинного обучения. Вопросы безопасности. 2022. № 4. С. 1–14. https://doi.org/10.25136/2409-7543.2022.4.38658</mixed-citation>
     <mixed-citation xml:lang="en">Pleshakova E. S., Gataullin S. T., Osipov A. V., Romanova E. V., Samburov N. S. Effective classification of natural language texts and determination of speech tonality using selected machine learning methods. Security Issues, 2022, (4): 1–14. (In Russ.) https://doi.org/10.25136/2409-7543.2022.4.38658</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B5">
    <label>5.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Челышев Э. А., Оцоков Ш. А., Раскатова М. В., Щеголев П. Сравнение методов классификации русскоязычных новостных текстов с использованием алгоритмов машинного обучения. Вестник кибернетики. 2022. № 1. С. 63–71. https://doi.org/10.34822/1999-7604-2022-1-63-71</mixed-citation>
     <mixed-citation xml:lang="en">Chelyshev E. A., Otsokov Sh. A., Raskatova M. V., Shchegolev P. Comparing classification methods for news texts in Russian using machine learning algorithms. Proceedings in Cybernetics, 2022, (1): 63–71. (In Russ.) https://doi.org/10.34822/1999-7604-2022-1-63-71</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B6">
    <label>6.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Anisuzzaman D. M., Malins J. G., Friedman P. A., Attia Z. I. Fine-tuning large language models for specialized use cases. Mayo Clinic Proceedings: Digital Health, 2025, 3(1). https://doi.org/10.1016/j.mcpdig.2024.11.005</mixed-citation>
     <mixed-citation xml:lang="en">Anisuzzaman D. M., Malins J. G., Friedman P. A., Attia Z. I. Fine-tuning large language models for specialized use cases. Mayo Clinic Proceedings: Digital Health, 2025, 3(1). https://doi.org/10.1016/j.mcpdig.2024.11.005</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B7">
    <label>7.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Blinova O., Tarasov N. A hybrid model of complexity estimation: Evidence from Russian legal texts. Frontiers in Artificial Intelligence, 2022, 5. https://doi.org/10.3389/frai.2022.1008530</mixed-citation>
     <mixed-citation xml:lang="en">Blinova O., Tarasov N. A hybrid model of complexity estimation: Evidence from Russian legal texts. Frontiers in Artificial Intelligence, 2022, 5. https://doi.org/10.3389/frai.2022.1008530</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B8">
    <label>8.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Brown T. B., Mann B., Ryder N., Subbiah M., Kaplan J., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., Agarwal A., Herbert-Voss A., Krueger G., Henighan T., Child R., Ramesh A., Ziegler D. M., Wu J., Winter C., Hesse C., Chen M., Sigler E., Litwin M., Gray S., Chess B., Clark J., Berner C., McCandlish S., Radford A., Sutskever I., Amodei D. Language models are few-shot learners. arXiv, 2020. https://doi.org/10.48550/arXiv.2005.14165</mixed-citation>
      <mixed-citation xml:lang="en">Brown T. B., Mann B., Ryder N., Subbiah M., Kaplan J., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., Agarwal A., Herbert-Voss A., Krueger G., Henighan T., Child R., Ramesh A., Ziegler D. M., Wu J., Winter C., Hesse C., Chen M., Sigler E., Litwin M., Gray S., Chess B., Clark J., Berner C., McCandlish S., Radford A., Sutskever I., Amodei D. Language models are few-shot learners. arXiv, 2020. https://doi.org/10.48550/arXiv.2005.14165</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B9">
    <label>9.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chung H. W., Hou L., Longpre S., Zoph B., Tay Y., Fedus W., Li Y., Wang X., Dehghani M., Brahma S., Webson A., Gu S. S., Dai Z., Suzgun M., Chen X., Chowdhery A., Castro-Ros A., Pellat M., Robinson K., Valter D., Narang S., Mishra G., Yu A., Zhao V., Huang Y., Dai A., Yu H., Petrov S., Chi Ed H., Dean J., Devlin J., Roberts A., Zhou D., Le Q. V., Wei J. Scaling instruction-finetuned language models. arXiv, 2022. https://doi.org/10.48550/arXiv.2210.11416</mixed-citation>
     <mixed-citation xml:lang="en">Chung H. W., Hou L., Longpre S., Zoph B., Tay Y., Fedus W., Li Y., Wang X., Dehghani M., Brahma S., Webson A., Gu S. S., Dai Z., Suzgun M., Chen X., Chowdhery A., Castro-Ros A., Pellat M., Robinson K., Valter D., Narang S., Mishra G., Yu A., Zhao V., Huang Y., Dai A., Yu H., Petrov S., Chi Ed H., Dean J., Devlin J., Roberts A., Zhou D., Le Q. V., Wei J. Scaling instruction-finetuned language models. arXiv, 2022. https://doi.org/10.48550/arXiv.2210.11416</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B10">
    <label>10.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Ding N., Qin Y., Yang G., Wei F., Yang Z., Su Y., Hu S., Chen Y., Chan C.-M., Chen W., Yi J., Zhao W., Wang X., Liu Z., Zheng H.-T., Chen J., Liu Y., Tang J., Li J., Sun M. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023, 5: 220–235. https://doi.org/10.1038/s42256-023-00626-4</mixed-citation>
     <mixed-citation xml:lang="en">Ding N., Qin Y., Yang G., Wei F., Yang Z., Su Y., Hu S., Chen Y., Chan C.-M., Chen W., Yi J., Zhao W., Wang X., Liu Z., Zheng H.-T., Chen J., Liu Y., Tang J., Li J., Sun M. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023, 5: 220–235. https://doi.org/10.1038/s42256-023-00626-4</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B11">
    <label>11.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Hoffmann J., Borgeaud S., Mensch A., Buchatskaya E., Cai T., Rutherford E., de Las Casas D., Hendricks L. A., Welbl J., Clark A., Hennigan T., Noland E., Millican K., Van den Driessche G., Damoc B., Guy A., Osindero S., Simonyan K., Rae J. W., Vinyals O., Sifre L. Training compute-optimal large language models. arXiv, 2022. https://doi.org/10.48550/arXiv.2203.15556</mixed-citation>
      <mixed-citation xml:lang="en">Hoffmann J., Borgeaud S., Mensch A., Buchatskaya E., Cai T., Rutherford E., de Las Casas D., Hendricks L. A., Welbl J., Clark A., Hennigan T., Noland E., Millican K., Van den Driessche G., Damoc B., Guy A., Osindero S., Simonyan K., Rae J. W., Vinyals O., Sifre L. Training compute-optimal large language models. arXiv, 2022. https://doi.org/10.48550/arXiv.2203.15556</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B12">
    <label>12.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Kaplan J., McCandlish S., Henighan T., Brown T. B., Chess B., Child R., Gray S., Radford A., Wu J., Amodei D. Scaling laws for neural language models. arXiv, 2020. https://doi.org/10.48550/arXiv.2001.08361</mixed-citation>
     <mixed-citation xml:lang="en">Kaplan J., McCandlish S., Henighan T., Brown T. B., Chess B., Child R., Gray S., Radford A., Wu J., Amodei D. Scaling laws for neural language models. arXiv, 2020. https://doi.org/10.48550/arXiv.2001.08361</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B13">
    <label>13.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lialin V., Deshpande V., Yao X., Rumshisky A. Scaling down to scale up: A guide to parameter-efficient fine-tuning. arXiv, 2023. https://doi.org/10.48550/arXiv.2303.15647</mixed-citation>
     <mixed-citation xml:lang="en">Lialin V., Deshpande V., Yao X., Rumshisky A. Scaling down to scale up: A guide to parameter-efficient fine-tuning. arXiv, 2023. https://doi.org/10.48550/arXiv.2303.15647</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B14">
    <label>14.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Liu Y., Ott M., Goyal N., Du J., Joshi M., Chen D., Levy O., Lewis M., Zettlemoyer L., Stoyanov V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv, 2019. https://doi.org/10.48550/arXiv.1907.11692</mixed-citation>
     <mixed-citation xml:lang="en">Liu Y., Ott M., Goyal N., Du J., Joshi M., Chen D., Levy O., Lewis M., Zettlemoyer L., Stoyanov V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv, 2019. https://doi.org/10.48550/arXiv.1907.11692</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B15">
    <label>15.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lu W., Luu R. K., Buehler M. J. Fine-tuning large language models for domain adaptation: Exploration of training strategies, scaling, model merging and synergistic capabilities. arXiv, 2024. https://doi.org/10.48550/arXiv.2409.03444</mixed-citation>
     <mixed-citation xml:lang="en">Lu W., Luu R. K., Buehler M. J. Fine-tuning large language models for domain adaptation: Exploration of training strategies, scaling, model merging and synergistic capabilities. arXiv, 2024. https://doi.org/10.48550/arXiv.2409.03444</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B16">
    <label>16.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Nikolich A., Korolev K., Bratchikov S., Kiselev I., Shelmanov A. Vikhr: The family of open-source instruction-tuned large language models for Russian. arXiv, 2024. https://doi.org/10.48550/arXiv.2405.13929</mixed-citation>
     <mixed-citation xml:lang="en">Nikolich A., Korolev K., Bratchikov S., Kiselev I., Shelmanov A. Vikhr: The family of open-source instruction-tuned large language models for Russian. arXiv, 2024. https://doi.org/10.48550/arXiv.2405.13929</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B17">
    <label>17.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Nikolich A., Puchkova A. Fine-tuning GPT-3 for Russian text summarization. arXiv, 2021. https://doi.org/10.48550/arXiv.2108.03502</mixed-citation>
     <mixed-citation xml:lang="en">Nikolich A., Puchkova A. Fine-tuning GPT-3 for Russian text summarization. arXiv, 2021. https://doi.org/10.48550/arXiv.2108.03502</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B18">
    <label>18.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Pratap S., Aranha A. R., Kumar D., Malhotra G., Iyer A. P. N., Shylaja S. S. The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques. Natural Language Processing Journal, 2025, 11. https://doi.org/10.1016/j.nlp.2025.100144</mixed-citation>
     <mixed-citation xml:lang="en">Pratap S., Aranha A. R., Kumar D., Malhotra G., Iyer A. P. N., Shylaja S. S. The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques. Natural Language Processing Journal, 2025, 11. https://doi.org/10.1016/j.nlp.2025.100144</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B19">
    <label>19.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Sardana N., Portes J., Doubov S., Franke J. Beyond chinchilla-optimal: Accounting for inference in language model scaling laws. arXiv, 2023. https://doi.org/10.48550/arXiv.2401.00448</mixed-citation>
     <mixed-citation xml:lang="en">Sardana N., Portes J., Doubov S., Franke J. Beyond chinchilla-optimal: Accounting for inference in language model scaling laws. arXiv, 2023. https://doi.org/10.48550/arXiv.2401.00448</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B20">
    <label>20.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Smetanin S., Komarov M. Deep transfer learning baselines for sentiment analysis in Russian. Information Processing &amp; Management, 2021, 58(3). https://doi.org/10.1016/j.ipm.2020.102484</mixed-citation>
     <mixed-citation xml:lang="en">Smetanin S., Komarov M. Deep transfer learning baselines for sentiment analysis in Russian. Information Processing &amp; Management, 2021, 58(3). https://doi.org/10.1016/j.ipm.2020.102484</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B21">
    <label>21.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Srinivasan K. P. V., Gumpena P., Yattapu M., Brahmbhatt V. H. Comparative analysis of different efficient fine tuning methods of large language models (LLMs) in low-resource setting. arXiv, 2024. https://doi.org/10.48550/arXiv.2405.13181</mixed-citation>
     <mixed-citation xml:lang="en">Srinivasan K. P. V., Gumpena P., Yattapu M., Brahmbhatt V. H. Comparative analysis of different efficient fine tuning methods of large language models (LLMs) in low-resource setting. arXiv, 2024. https://doi.org/10.48550/arXiv.2405.13181</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B22">
    <label>22.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wang L., Chen S., Jiang L., Pan S., Cai R., Yang S., Yang F. Parameter-efficient fine-tuning in large language models: A survey of methodologies. Artificial Intelligence Review, 2025, 58. https://doi.org/10.1007/s10462-025-11236-4</mixed-citation>
     <mixed-citation xml:lang="en">Wang L., Chen S., Jiang L., Pan S., Cai R., Yang S., Yang F. Parameter-efficient fine-tuning in large language models: A survey of methodologies. Artificial Intelligence Review, 2025, 58. https://doi.org/10.1007/s10462-025-11236-4</mixed-citation>
    </citation-alternatives>
   </ref>
  </ref-list>
 </back>
</article>
