DALES: automated tool for detection, annotation, labelling and segmentation of multiple objects in multi-camera video streams

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we propose a new software tool called DALES to extract semantic information from multi-view videos based on an analysis of their visual content. Our system is fully automatic and is well suited to multi-camera environments. Once the multi-view video sequences are loaded into DALES, the software performs detection, counting, and segmentation of the visual objects evolving in the provided video streams. These objects of interest are then labelled, and the related frames are annotated with the corresponding semantic content. Moreover, a textual script containing the video annotations is generated automatically. The DALES system shows excellent performance in terms of accuracy and computational speed and is robustly designed to ensure view synchronization.
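The abstract describes a pipeline that detects, segments, counts, and labels objects across synchronized camera views and then writes a textual annotation script. The Python sketch below is a hypothetical illustration of such a pipeline, not the authors' implementation; the names (Detection, detect_and_segment, annotate_streams) and the annotation line format are assumptions introduced here for clarity.

    # Hypothetical sketch of a DALES-style multi-camera annotation pipeline.
    # All names and the output format are illustrative, not the authors' API.
    from dataclasses import dataclass
    from typing import Dict, Iterable, List


    @dataclass
    class Detection:
        camera_id: str     # which synchronized view the object was seen in
        frame_index: int   # frame number within that view
        label: str         # semantic label assigned to the segmented object
        bbox: tuple        # (x, y, w, h) bounding box of the object's mask


    def detect_and_segment(frame, camera_id: str, frame_index: int) -> List[Detection]:
        """Placeholder for the detection/segmentation stage; plug in a real detector here."""
        raise NotImplementedError


    def annotate_streams(views: Dict[str, Iterable]) -> List[str]:
        """Run detection over synchronized views and emit one textual annotation per object."""
        script_lines = []
        for camera_id, frames in views.items():
            for idx, frame in enumerate(frames):
                for det in detect_and_segment(frame, camera_id, idx):
                    script_lines.append(
                        f"{det.camera_id} frame={det.frame_index} "
                        f"label={det.label} bbox={det.bbox}"
                    )
        return script_lines

In this sketch, the returned lines stand in for the automatically generated annotation script mentioned in the abstract; view synchronization is assumed to be handled upstream when the per-camera frame sequences are aligned.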
Original language: English
Title of host publication: Proceedings of the 25th International Conference on Computational Linguistics
Subtitle of host publication: Dublin, Ireland, August 23-29, 2014
Publisher: The Association for Computational Linguistics
Pages: 87-94
Number of pages: 8
DOIs
Publication status: Published - 2014
Externally published: Yes
