The paper "Comparison Between Parameter-Efficient Techniques and Full Fine-Tuning: A Case Study on Multilingual News Article Classification," produced by our consortium partners - Department of Computer Science at the University of Sheffield and Kempelen Institute of Intelligent Technologies - examines the effectiveness of parameter-efficient fine-tuning techniques (PEFTs) compared to full fine-tuning (FFT) for multilingual text classification tasks. The focus is on adapters, Low-Rank Adaptation (LoRA), and BitFit methods, which aim to reduce the computational cost while maintaining or improving classification performance.
The primary objectives of the study were to:
- Compare the classification performance and computational costs of PEFTs and FFT.
- Evaluate these techniques on multilingual text classification tasks, including genre detection, framing detection, and persuasion techniques detection.
- Investigate the efficacy of PEFTs across different training scenarios, including multilingual, monolingual, and cross-lingual setups.
Methodology
The study used XLM-RoBERTa, a multilingual transformer-based language model, fine-tuned with FFT and with three PEFT methods: adapters, LoRA, and BitFit (an illustrative LoRA configuration is sketched after the task descriptions below). The researchers conducted experiments on three tasks:
- News Genre Classification: Differentiating between opinion, objective reporting, and satire.
- Framing Detection: Identifying one or more of fourteen framing dimensions in news articles.
- Persuasion Techniques Detection: Detecting twenty-three persuasion techniques in paragraphs of news articles.
The data for these tasks were derived from SemEval-2023 Task 3, which provided multilingual datasets in six languages for training and three additional languages for testing (see the two papers published under SemEval-2023 Task 3 by the University of Sheffield and by KInIT).
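As an illustration of what one of these fine-tuning setups involves, the sketch below configures LoRA on XLM-RoBERTa for a sequence classification task using the Hugging Face `peft` library. This is not the authors' code: the checkpoint size, rank, scaling factor, dropout, and target modules are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's implementation): LoRA fine-tuning of
# XLM-RoBERTa for sequence classification via the Hugging Face `peft` library.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "xlm-roberta-base"  # checkpoint size assumed for the example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=3  # e.g. three genre classes: opinion, reporting, satire
)

# LoRA: freeze the base model and train only low-rank update matrices
# injected into selected attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                 # rank of the low-rank decomposition (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],   # attention projections in RoBERTa layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # shows the reduction vs. full fine-tuning
```

The wrapped model can then be trained with a standard `Trainer` loop; swapping the `LoraConfig` for an adapter or BitFit setup changes only which parameters receive gradients, not the training procedure itself.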
Key Findings
- Performance:
  - FFT generally performs better for tasks involving longer texts (genre and framing detection).
  - LoRA outperforms FFT and adapters for the persuasion techniques detection task, which involves shorter texts.
  - Adapters show mixed results, sometimes improving performance in monolingual scenarios but often underperforming compared to FFT and LoRA in multilingual setups.
- Computational Efficiency:
  - PEFTs significantly reduce the number of trainable parameters and peak VRAM usage (a parameter-count sketch follows this list).
  - LoRA and adapters reduce training time to 56-71% of the time required for FFT.
  - Despite lower computational costs, PEFTs generally result in slightly lower classification performance than FFT.
- Training Scenarios:
  - Multilingual joint training scenarios yield the best overall results across all tasks and techniques.
  - Monolingual training (English-only) and cross-lingual training (English + translations) scenarios show lower performance, particularly for languages unseen during training.
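To make the parameter-reduction finding concrete, the sketch below compares trainable parameter counts for full fine-tuning against a BitFit-style setup, in which only bias terms and the classification head remain trainable. It is an illustrative example rather than the paper's code; the checkpoint name and label count are assumed.

```python
# Illustrative only (not the paper's code): compare trainable parameter counts
# for full fine-tuning vs. a BitFit-style setup on XLM-RoBERTa.
from transformers import AutoModelForSequenceClassification

def count_trainable(model):
    """Number of parameters that would receive gradient updates."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3  # checkpoint and label count assumed
)
fft_params = count_trainable(model)  # full fine-tuning updates everything

# BitFit: freeze all weights, train only bias terms (and the task head).
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias") or name.startswith("classifier")

print(f"FFT trainable parameters:    {fft_params:,}")
print(f"BitFit trainable parameters: {count_trainable(model):,}")
```

The same `count_trainable` comparison applies to the LoRA and adapter setups, since all three methods freeze the bulk of the base model's weights and train only a small set of added or selected parameters.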
Conclusion
The study demonstrated that while PEFTs like LoRA and adapters can significantly reduce computational costs, they may also slightly compromise classification performance. LoRA showed particular promise for shorter texts, while FFT remained superior for longer texts. The findings suggest that a balanced approach, leveraging the strengths of each method according to the specific task and data characteristics, can optimise both efficiency and accuracy in multilingual text classification.
This research contributes to the ongoing efforts to make advanced language models more accessible and sustainable, particularly for researchers and practitioners with limited computational resources. The study highlights the potential of PEFTs to maintain robust performance while significantly reducing the computational burden.
The paper is available at: https://doi.org/10.1371/journal.pone.0301738
Link to the software accompanying the paper on Zenodo: https://zenodo.org/records/10066649
Link to the paper on Zenodo: https://zenodo.org/records/11148437