ICDSC 2014
Eighth ACM/IEEE International Conference on Distributed Smart Cameras
November 4 - November 7, 2014, Venezia, Italy

Keynote Speakers

Wednesday, November 5

How will smart sensors facilitate future automated border crossings?

Andreas Kriechbaum-Zabini, AIT Austrian Institute of Technology GmbH

Abstract:

Due to immigration problems in recent years (e.g., Italy, Ukraine), European frontiers have become unstable, and there is a real and ever-growing need for secure borders. At the same time, Europe's economic relevance, as well as the Schengen Code regulating the right of free movement within the EU, drives an increase in passenger flow at border crossings: air border crossings are expected to increase by 80%, from 400 million in 2009 to 720 million in 2030. Consequently, the demands for both passenger facilitation and security necessitate innovation at border crossings. Automation is one option for mitigating the passenger flow problem, but current installations reveal multiple difficulties, and the redesign of such a complex security process requires a multi-stakeholder environment. Various security technologies are joined together in automated border control (ABC) solutions, integrating biometrics, surveillance, certificate exchange, data protection, secure user interaction and information security. In this talk, we will introduce a state-of-the-art ABC system, describe the evolving issues of the field, and discuss how smart sensors, in particular optical systems, can help to solve many of these challenges and improve border security while enhancing the passenger experience. We will present the current drawbacks and advantages of this technology in order to derive possible options for improvement. Many important challenges in ABC are interdisciplinary and relate to several topics of this conference: distributed video analytics, multi-sensor data aggregation, information fusion, object recognition, vision-based smart environments, surveillance, tracking applications and middleware. ABC is therefore a very interesting concrete case in which your solutions can be integrated.

Biography:

Andreas Kriechbaum-Zabini is a project manager at the Safety & Security Department of AIT Austrian Institute of Technology GmbH. Within the Business Unit "Video and Security Technology", he is the key contact for airport-related topics. Since 2011, he has worked as an expert in the field of automated border control (ABC), coordinating national and international projects, and has gained valuable insight into existing operational solutions, end-user requirements, important stakeholders and the scientific analysis of such environments. He is currently the technical coordinator of the EU-FP7 Integrated Project FastPass (https://www.fastpass-project.eu/). Targeting the harmonisation of ABC systems in Europe, this project gathers more than 25 partners along the entire ABC value chain. Andreas studied Telematics at Graz University of Technology, Austria, specialising in "Computer Vision and Graphics" and "Telecommunication Systems and Mobile Computing". His professional career started in 2000 in the area of computer vision and automated surveillance, where he worked in several nationally funded projects, such as "Future Border Control" and "AREA MUMOSIS next" within the Austrian security research programme (KIRAS), as well as in EU projects such as "FascinatE", "porTiVity", "K-Space", "Polymnia", "MECiTV", "Detect" and "VIZARD".




Thursday, November 6

Hierarchical Compositional Representations of Object Structure

Ales Leonardis, University of Birmingham

Abstract:

Visual categorisation has been an area of intensive research in the vision community for several decades. Ultimately, the goal is to efficiently detect and recognize an increasing number of object classes. The problem entangles three highly interconnected issues: the internal object representation, which should compactly capture the visual variability of objects and generalize well over each class; a means for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image and scales favorably with the number of objects. In this talk, I will present our approach, which combines a learned compositional hierarchy, representing (2D) shapes of multiple object classes, with a coarse-to-fine matching scheme that exploits a taxonomy of objects to perform efficient object detection. I will conclude with a discussion of a number of possible extensions of compositional hierarchical representations to other visual and non-visual modalities.
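
To make the coarse-to-fine matching idea concrete, the following minimal Python sketch shows the general mechanism only; it is not the speaker's actual model. Parts are either primitive feature channels or compositions of child parts at relative offsets, a coarse pass scans the image on a grid, and only the most promising cells are re-scored at full resolution. The class names, the averaging score and the thresholds are assumptions made for the example.

# Illustrative toy only; not the speaker's model or implementation.
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Part:
    # A node in the hierarchy: a primitive feature channel or a composition of children.
    channel: int = -1                              # feature channel if primitive
    children: list = field(default_factory=list)   # list of (child Part, (dy, dx)) pairs

    def score(self, fmap, y, x):
        if not self.children:                      # primitive part: read the feature map
            return float(fmap[self.channel, y, x])
        H, W = fmap.shape[1:]
        vals = []
        for child, (dy, dx) in self.children:
            cy, cx = y + dy, x + dx
            inside = 0 <= cy < H and 0 <= cx < W
            vals.append(child.score(fmap, cy, cx) if inside else 0.0)
        return float(np.mean(vals))                # toy compositional score: mean of children

def coarse_to_fine_detect(root, fmap, stride=4, keep=0.5):
    # Scan on a coarse grid, then re-score only the most promising cells at full resolution.
    H, W = fmap.shape[1:]
    coarse = [((y, x), root.score(fmap, y, x))
              for y in range(0, H, stride) for x in range(0, W, stride)]
    best = sorted(coarse, key=lambda p: -p[1])[: max(1, int(keep * len(coarse)))]
    refined = []
    for (y0, x0), _ in best:
        for y in range(y0, min(y0 + stride, H)):
            for x in range(x0, min(x0 + stride, W)):
                refined.append(((y, x), root.score(fmap, y, x)))
    return max(refined, key=lambda p: p[1])

# Toy feature map with two "edge orientation" channels and a hand-built two-level part.
fmap = np.random.rand(2, 32, 32) * 0.1
fmap[0, 8:14, 10:16] = 0.6                 # broad support so the coarse pass responds
fmap[1, 10:16, 10:16] = 0.6
fmap[0, 10, 12] = fmap[1, 12, 12] = 1.0    # sharp peak recovered by the fine pass
edge_v, edge_h = Part(channel=0), Part(channel=1)
corner = Part(children=[(edge_v, (0, 0)), (edge_h, (2, 0))])
print(coarse_to_fine_detect(corner, fmap))  # -> ((10, 12), 1.0)

In the learned hierarchies discussed in the talk, the compositions and the object taxonomy are learned from images rather than hand-built as in this toy model.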

Biography:

Ales Leonardis is Professor of Robotics at the University of Birmingham and co-Director of the Centre for Computational Neuroscience and Cognitive Robotics. He is also an adjunct professor at the Faculty of Computer Science, Graz University of Technology. He has previously worked at the GRASP Lab at the University of Pennsylvania, at PRIP (TU Wien), at ETH Zurich and at the University of Ljubljana. His research interests include robust and adaptive methods for computer vision, object and scene recognition and categorization, statistical visual learning, 3D object modeling, and biologically motivated vision. He is (co)author of more than 200 refereed papers. He has been an associate editor of IEEE PAMI, an editorial board member of Pattern Recognition, and an editor of the Springer book series Computational Imaging and Vision. His paper "Multiple Eigenspaces" won the 29th Annual Pattern Recognition Society award. In 2004, he was awarded a prestigious national award for his research achievements.

Friday, November 7

Structured Robust PCA and Dynamics-based Invariants for Multi-Camera Video Understanding

Octavia I. Camps, Department of Electrical and Computer Engineering, Northeastern University

Abstract:

The power of geometric invariants to provide solutions to computer vision problems has been recognized for a long time. In contrast, dynamics-based invariants remain largely untapped. Yet visual data come in streams: videos are temporal sequences of frames, images are ordered sequences of rows of pixels, and contours are chained sequences of edges. In this talk, I will show how making this ordering explicit makes it possible to exploit dynamics-based invariants to capture useful information from video and image data. In particular, I will describe how to efficiently estimate dynamics-based invariants from incomplete and corrupted data by formulating the problem as a structured robust PCA problem, in which a structured matrix built from the data is decomposed into structured low-rank and sparse matrices. Finally, I will show how to use these invariants to perform data association and classification in the context of computer vision applications for multi-camera tracking and cross-view activity recognition.
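
As a rough illustration of the kind of decomposition the abstract refers to, the Python sketch below builds a Hankel (structured) matrix from a corrupted 1-D trajectory and splits it into a low-rank part plus a sparse part using a generic robust PCA (principal component pursuit) solver based on inexact augmented Lagrangian iterations. The structured formulation presented in the talk additionally constrains both terms to retain the Hankel structure, which this sketch omits; the parameter heuristics and the toy data are assumptions made for the example.

# Illustrative toy only; a generic robust PCA sketch, not the speaker's method.
import numpy as np

def hankel_matrix(x, rows):
    # Stack overlapping windows of the sequence x as the columns of a Hankel matrix.
    cols = len(x) - rows + 1
    return np.column_stack([x[i:i + rows] for i in range(cols)])

def soft_threshold(X, tau):
    # Entry-wise shrinkage used for the sparse (gross-error) term.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular-value shrinkage used for the low-rank term.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_pca(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    # Decompose M into low-rank L plus sparse S (principal component pursuit).
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy example: a smooth trajectory (simple dynamics) corrupted by a few gross outliers.
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t) + 0.5 * np.cos(2 * t)
x_corrupt = x.copy()
x_corrupt[[20, 75, 140]] += 3.0

H = hankel_matrix(x_corrupt, rows=20)   # structured matrix built from the ordered data
L, S = robust_pca(H)
print("rank of recovered low-rank part:", np.linalg.matrix_rank(L, tol=1e-3))
print("entries flagged as outliers:", int((np.abs(S) > 1.0).sum()))

In this toy, the rank of the recovered low-rank term acts as a simple stand-in for a dynamics-based invariant of the underlying trajectory, while the sparse term isolates the gross errors.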

Biography:

Octavia Camps received a B.S. degree in computer science and a B.S. degree in electrical engineering from the Universidad de la Republica (Uruguay), and an M.S. and a Ph.D. degree in electrical engineering from the University of Washington. Prof. Camps was a visiting researcher in the Computer Science Department at Boston University during Spring 2013. Since 2006, she has been a Professor in the Electrical and Computer Engineering Department at Northeastern University. From 1991 to 2006, she was a faculty member in Electrical Engineering and in Computer Science and Engineering at The Pennsylvania State University. In 2000, she was a visiting faculty member at the California Institute of Technology and at the University of Southern California. Her main research interests include robust computer vision, image processing, and machine learning. She is a former associate editor of Pattern Recognition and Machine Vision Applications. She is a member of the IEEE.



