Timothy Do, Pranav Saran, Harshita Poojary, Pranav Prabhu, Sean O'Brien, Vasu Sharma, Kevin Zhu
Algoverse AI Research
In this paper, we address the persistent challenges that figurative language poses for natural language processing (NLP) systems, particularly in low-resource languages such as Konkani. We present a hybrid model that integrates pre-trained Multilingual BERT (mBERT) with a bidirectional LSTM and a linear classifier. This architecture is fine-tuned on a newly annotated Konkani dataset for metaphor classification, developed as part of this work. To improve the model's efficiency, we implement a gradient-based attention head pruning strategy. For metaphor classification, the pruned model achieves an accuracy of 78%. We also apply our pruning approach to an existing idiom classification task, achieving 83% accuracy. These results demonstrate the effectiveness of attention head pruning for building efficient NLP tools in underrepresented languages.
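To make the two components named in the abstract concrete, the sketch below shows how an mBERT encoder, a bidirectional LSTM, and a linear classifier could be stacked, and how a gradient-based importance score per attention head could be computed to guide pruning. This is a minimal illustration assuming PyTorch and Hugging Face Transformers, not the authors' released code; the class name, BiLSTM hidden size, and other hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# mBERT + BiLSTM + linear classifier, plus gradient-based head importance.
import torch
import torch.nn as nn
from transformers import BertModel


class MBertBiLSTMClassifier(nn.Module):
    def __init__(self, lstm_hidden=256, num_labels=2):  # sizes are assumptions
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        self.lstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,  # 768 for mBERT
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)  # linear head

    def forward(self, input_ids, attention_mask, head_mask=None):
        # Contextual token embeddings from mBERT (optionally with a head mask).
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask, head_mask=head_mask
        ).last_hidden_state
        # BiLSTM over the token sequence; concatenate final forward/backward states.
        _, (h_n, _) = self.lstm(hidden)
        pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.classifier(pooled)


def head_importance(model, input_ids, attention_mask, labels):
    """Gradient-based head importance: |dL / d(head mask)| per attention head.

    Scores are typically accumulated over several batches; the lowest-scoring
    heads are candidates for removal, e.g. via model.encoder.prune_heads(...).
    """
    cfg = model.encoder.config
    head_mask = torch.ones(
        cfg.num_hidden_layers, cfg.num_attention_heads, requires_grad=True
    )
    logits = model(input_ids, attention_mask, head_mask=head_mask)
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()
    return head_mask.grad.abs()  # low values -> prunable heads
```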
| Metric | Idiom (Original) | Idiom (Pruned) | Metaphor (Original) | Metaphor (Pruned) |
|---|---|---|---|---|
| Precision | 0.87 | 0.86 | 1.00 | 0.87 |
| Recall | 0.89 | 0.91 | 0.75 | 0.65 |
| F1-Score | 0.88 | 0.88 | 0.86 | 0.74 |
| Accuracy | 0.82 | 0.83 | 0.88 | 0.78 |
| Macro Avg. Precision | 0.78 | 0.79 | 0.90 | 0.79 |
| Macro Avg. Recall | 0.77 | 0.77 | 0.88 | 0.78 |
| Weighted Avg. Precision | 0.82 | 0.82 | 0.90 | 0.79 |
| Weighted Avg. Recall | 0.82 | 0.83 | 0.88 | 0.78 |
```bibtex
@misc{do2025pruningperformanceefficientidiom,
  title={Pruning for Performance: Efficient Idiom and Metaphor Classification in Low-Resource Konkani Using mBERT},
  author={Timothy Do and Pranav Saran and Harshita Poojary and Pranav Prabhu and Sean O'Brien and Vasu Sharma and Kevin Zhu},
  year={2025},
  eprint={2506.02005},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.02005}
}
```