XFormParser: A Simple and Effective Multimodal Multilingual Semi-structured Form Parser


In the domain of document AI, semi-structured form parsing plays a crucial role. This task leverages techniques from key information extraction (KIE), dealing with inputs that range from plain text to intricate multimodal data comprising images and structural layouts. The advent of pre-trained multimodal models has driven the extraction of key information from form documents in different formats such as PDFs and images. Nonetheless, form parsing is still hampered by notable challenges, such as weak multilingual parsing capabilities and reduced recall in contexts rich in text and visuals. In this work, we introduce a simple but effective Multimodal and Multilingual semi-structured FORM PARSER (XFormParser), which is anchored on a comprehensive pre-trained language model and innovatively integrates semantic entity recognition (SER) and relation extraction (RE) into a unified framework, enhanced by a novel staged warm-up training approach that employs soft labels to significantly improve form parsing accuracy without increasing inference overhead. Furthermore, we have developed InDFormBench, a benchmark dataset tailored to the parsing requirements of multilingual forms in various industrial contexts. Through rigorous testing on established multilingual benchmarks and InDFormBench, XFormParser demonstrates its efficacy, surpassing state-of-the-art (SOTA) models on the RE task in language-specific setups with an F1 score improvement of up to 1.79%. Compared with existing SOTA baselines, our framework also shows markedly improved performance across tasks in both multilingual and zero-shot settings. The code is publicly available at this https URL.
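The abstract describes the architecture only at a high level. Purely as an illustration of what "integrating SER and RE into a unified framework" on a shared pre-trained encoder, together with a warm-up schedule for soft labels, might look like, here is a minimal PyTorch sketch. Everything in it is an assumption rather than the authors' code: the class and function names, the text-only xlm-roberta-base stand-in for the multimodal backbone (the actual model would use a layout- and image-aware encoder), and the linear warm-up blending of hard and soft labels.

```python
# Illustrative sketch only; not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel


class UnifiedFormParser(nn.Module):
    """One shared encoder feeding two heads: token-level SER and pairwise RE."""

    def __init__(self, backbone="xlm-roberta-base", num_ser_labels=7, num_re_labels=2):
        super().__init__()
        # Text-only placeholder; a real multimodal form parser would use a
        # layout- and image-aware encoder here.
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.ser_head = nn.Linear(hidden, num_ser_labels)      # per-token SER logits
        self.re_head = nn.Sequential(                          # entity-pair RE classifier
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_re_labels)
        )

    def forward(self, encoded_inputs, entity_pairs):
        # entity_pairs: (batch, num_pairs, 2) token indices of head/tail entities.
        hidden = self.encoder(**encoded_inputs).last_hidden_state      # (B, T, H)
        ser_logits = self.ser_head(hidden)                             # (B, T, num_ser_labels)
        batch_idx = torch.arange(hidden.size(0), device=hidden.device).unsqueeze(1)
        head_repr = hidden[batch_idx, entity_pairs[..., 0]]            # (B, P, H)
        tail_repr = hidden[batch_idx, entity_pairs[..., 1]]            # (B, P, H)
        re_logits = self.re_head(torch.cat([head_repr, tail_repr], dim=-1))
        return ser_logits, re_logits


def warmup_soft_label_loss(logits, hard_labels, soft_targets, step, warmup_steps):
    """Blend hard-label cross-entropy with a soft-target KL term, ramping the
    soft-label weight over `warmup_steps`; one plausible reading of a staged
    warm-up with soft labels, though the paper's exact schedule may differ."""
    alpha = min(1.0, step / max(1, warmup_steps))
    ce = F.cross_entropy(logits, hard_labels)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), soft_targets, reduction="batchmean")
    return (1 - alpha) * ce + alpha * kl
```

Because both heads share a single encoder forward pass, adding the RE head on top of SER costs essentially nothing extra at inference time, which matches the abstract's claim of improved accuracy without increasing inference overhead.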

https://arxiv.org/abs/2405.17336

https://arxiv.org/pdf/2405.17336.pdf
