TEII: Think, Explain, Interact and Iterate with Large Language Models to Solve Cross-lingual Emotion Detection


Cross-lingual emotion detection allows us to analyze global trends, public opinion, and social phenomena at scale. We participated in the Explainability of Cross-lingual Emotion Detection (EXALT) shared task, achieving an F1-score of 0.6046 on the evaluation set for the emotion detection sub-task. Our system outperformed the baseline by more than 0.16 absolute F1-score and ranked second among competing systems. We conducted experiments with fine-tuning, zero-shot learning, and few-shot learning for Large Language Model (LLM)-based models, as well as embedding-based BiLSTM and KNN for non-LLM techniques. Additionally, we introduced two novel methods: the Multi-Iteration Agentic Workflow and the Multi-Binary-Classifier Agentic Workflow. We found that LLM-based approaches perform well on multilingual emotion detection. Furthermore, ensembles combining all the models we experimented with yielded higher F1-scores than any single approach alone.
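The abstract's key finding is that an ensemble of the individual systems beats any single one. A common way to combine categorical predictions is majority voting; the sketch below illustrates that idea only. The model names, emotion labels, and tie-breaking behavior are illustrative assumptions, not the paper's actual ensembling procedure.

```python
from collections import Counter

# Hypothetical per-example labels from three of the system families named in
# the abstract (LLM prompting, BiLSTM, KNN); all values are made up.
predictions = {
    "llm_few_shot": ["joy", "anger", "neutral", "joy"],
    "bilstm":       ["joy", "sadness", "neutral", "fear"],
    "knn":          ["joy", "anger", "sadness", "joy"],
}

def majority_vote(preds):
    """Combine per-example labels from several models by majority vote."""
    ensembled = []
    for labels in zip(*preds.values()):
        # Counter.most_common(1) picks the most frequent label; ties are
        # broken by first insertion order (i.e., the first model listed).
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled

print(majority_vote(predictions))  # -> ['joy', 'anger', 'neutral', 'joy']
```

A weighted vote (e.g., weighting each model by its validation F1-score) is a common refinement of this scheme.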

https://arxiv.org/abs/2405.17129

https://arxiv.org/pdf/2405.17129.pdf
