DALES: automated tool for detection, annotation, labelling and segmentation of multiple objects in multi-camera video streams

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

In this paper, we propose a new software tool called DALES to extract semantic information from multi-view videos based on the analysis of their visual content. Our system is fully automatic and is well suited for multi-camera environments. Once the multi-view video sequences are loaded into DALES, our software performs the detection, counting, and segmentation of the visual objects evolving in the provided video streams. These objects of interest are then processed and labelled, and the related frames are annotated with the corresponding semantic content. Moreover, a textual script is automatically generated from the video annotations. The DALES system shows excellent performance in terms of accuracy and computational speed and is robustly designed to ensure view synchronization.
Original language: English
Title of host publication: Proceedings of the 25th International Conference on Computational Linguistics
Subtitle of host publication: Dublin, Ireland, August 23-29 2014
Publisher: The Association for Computational Linguistics
Pages: 87-94
Number of pages: 8
DOIs: https://doi.org/10.3115/v1/W14-5413
Publication status: Published - 2014
Externally published: Yes

Cite this

Bhat, M., & Olszewska, J. I. (2014). DALES: automated tool for detection, annotation, labelling and segmentation of multiple objects in multi-camera video streams. In Proceedings of the 25th International Conference on Computational Linguistics: Dublin, Ireland, August 23-29 2014 (pp. 87-94). The Association for Computational Linguistics. https://doi.org/10.3115/v1/W14-5413