Object Detection in Digitized Documents Using Deep Neural Networks (Détection d'Objets dans les documents numérisés par réseaux de neurones profonds)

01/27/2023
by Mélodie Boillet, et al.

In this thesis, we study several tasks related to document layout analysis, such as the detection of text lines, the splitting of documents into acts, and the detection of the writing support. We aim to propose object detection models that take into account the difficulties of document processing, in particular the limited amount of available training data. To this end, we propose two deep neural models following two different approaches: a pixel-level detection model and an object-level detection model.

We first propose a detection model with few parameters that is fast at prediction time and can produce accurate prediction masks from a small amount of training data. We also implemented a strategy for collecting and standardizing many datasets, which we use to train a single line detection model that generalizes well to out-of-sample documents.

We then propose a Transformer-based detection model. Designing such a model required redefining the task of object detection in document images and studying different approaches. Following this study, we propose an object detection strategy that sequentially predicts the coordinates of the objects' enclosing rectangles through pixel classification. This strategy yields a fast model with few parameters.

Finally, in an industrial setting, new non-annotated data regularly become available. When adapting a model to such data, it is desirable to provide the system with as few new annotated samples as possible, so selecting the most relevant samples for manual annotation is crucial to a successful adaptation. To this end, we propose confidence estimators based on different approaches to object detection. We show that these estimators greatly reduce the amount of annotated data required while optimizing performance.
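
As an illustration of predicting box coordinates through classification, here is a minimal sketch, assuming PyTorch: each coordinate of an enclosing rectangle is predicted in turn as one class among the possible pixel positions, conditioned on a global image feature. The tiny GRU decoder, the SequentialBoxDecoder name, and the dimensions are illustrative assumptions only; this is not the Transformer-based model proposed in the thesis.

# Minimal sketch (assumption, not the thesis model): coordinates of an
# enclosing rectangle predicted one at a time as classes over pixel positions.
import torch
import torch.nn as nn

class SequentialBoxDecoder(nn.Module):
    """Predicts x1, y1, x2, y2 one at a time as classes in [0, n_positions)."""
    def __init__(self, feat_dim: int = 128, n_positions: int = 256):
        super().__init__()
        self.embed = nn.Embedding(n_positions + 1, feat_dim)  # +1 for a start token
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, n_positions)
        self.start = n_positions

    def forward(self, image_feat: torch.Tensor, n_coords: int = 4) -> torch.Tensor:
        # image_feat: (batch, feat_dim) global feature of the page image
        batch = image_feat.size(0)
        token = torch.full((batch,), self.start, dtype=torch.long)
        hidden = image_feat.unsqueeze(0)            # condition the decoder on the image
        coords = []
        for _ in range(n_coords):                   # x1, y1, x2, y2
            inp = self.embed(token).unsqueeze(1)    # (batch, 1, feat_dim)
            out, hidden = self.rnn(inp, hidden)
            logits = self.classifier(out.squeeze(1))
            token = logits.argmax(dim=-1)           # greedy choice of a pixel position
            coords.append(token)
        return torch.stack(coords, dim=1)           # (batch, 4) integer coordinates

# Usage with a hypothetical encoder producing a 128-d feature per page:
decoder = SequentialBoxDecoder()
boxes = decoder(torch.zeros(2, 128))                # two pages -> two predicted boxes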

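The selection of relevant samples for annotation can be sketched in a similarly hedged way: pages are ranked by an aggregate confidence over their predicted objects, and the least confident pages are sent to manual annotation first. The page_confidence and select_for_annotation helpers and the (box, score) prediction format are assumptions for illustration, not the confidence estimators studied in the thesis.

# Minimal sketch (assumption): rank unlabeled pages by prediction confidence
# and keep only the least confident ones for manual annotation.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]

def page_confidence(detections: List[Tuple[Box, float]]) -> float:
    """Aggregate per-object scores into one page-level confidence.
    An empty prediction is treated as maximally uncertain."""
    if not detections:
        return 0.0
    return sum(score for _, score in detections) / len(detections)

def select_for_annotation(predictions: Dict[str, List[Tuple[Box, float]]],
                          budget: int) -> List[str]:
    """Return the `budget` page identifiers with the lowest confidence."""
    ranked = sorted(predictions, key=lambda page: page_confidence(predictions[page]))
    return ranked[:budget]

# Usage with a hypothetical detector returning (box, score) pairs per page:
# predictions = {page: detector.predict(page) for page in unlabeled_pages}
# to_annotate = select_for_annotation(predictions, budget=50)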