Video Synthesis from Intensity and Event Frames
Abstract: Event cameras are neuromorphic devices that naturally respond to brightness changes and offer several advantages over traditional cameras. However, the difficulty of applying traditional computer vision algorithms to event data limits their usability. Therefore, in this paper we investigate a deep learning-based architecture that combines an initial grayscale frame with a series of event frames to estimate the following intensity frames. In particular, a fully-convolutional encoder-decoder network is employed and evaluated on the frame synthesis task using an automotive event-based dataset. Performance measured with pixel-wise metrics confirms the quality of the images synthesized by the proposed architecture.
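The abstract describes a network that takes an initial grayscale frame together with a series of event frames as input. The paper itself is not reproduced here, so the exact input encoding is not specified; the sketch below shows one common way to prepare such an input, accumulating polarity events into a frame and stacking it with the grayscale image along the channel axis. The function names and the signed-accumulation scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate (x, y, polarity) events into a single signed event frame.

    Illustrative assumption: positive events add +1 and negative events
    add -1 at the event's pixel location.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, polarity in events:
        frame[y, x] += 1.0 if polarity > 0 else -1.0
    return frame

def build_network_input(intensity_frame, event_frames):
    """Stack the initial grayscale frame and the event frames as channels.

    Returns an array of shape (1 + num_event_frames, H, W), a typical
    channel-first layout for a fully-convolutional encoder-decoder.
    """
    channels = [np.asarray(intensity_frame, dtype=np.float32)]
    channels.extend(np.asarray(f, dtype=np.float32) for f in event_frames)
    return np.stack(channels, axis=0)

# Usage: one 4x6 grayscale frame plus one accumulated event frame.
H, W = 4, 6
gray = np.zeros((H, W), dtype=np.float32)
events = [(0, 0, 1), (2, 1, -1), (2, 1, -1)]
event_frame = accumulate_events(events, H, W)
net_input = build_network_input(gray, [event_frame])
print(net_input.shape)  # (2, 4, 6)
```

A network following the paper's description would then map this stacked tensor to the next intensity frame; how many event frames are stacked per prediction is a design choice left to the full paper.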
Citation:
Pini, Stefano; Borghi, Guido; Vezzani, Roberto; Cucchiara, Rita, "Video Synthesis from Intensity and Event Frames", Proceedings of the 20th International Conference on Image Analysis and Processing (ICIAP), Trento, Italy, 9-13 September 2019. DOI: 10.1007/978-3-030-30642-7_28
Paper download:
- Author version:
- DOI: 10.1007/978-3-030-30642-7_28