ICDSC 2012: 6th ACM/IEEE International Conference on Distributed Smart Cameras

Oct 30 - Nov 2, 2012, Hong Kong

Tutorials

 

Tutorial #1: Prof. Faisal Qureshi, "Virtual Vision"
Tutorial #2: Prof. Christian Micheloni, "Video analysis in Pan-Tilt-Zoom camera networks"
Tutorial #3: Dr. Senem Velipasalar, "Smart Cameras Getting Smarter"

 

 


 

Prof. Faisal Qureshi

Virtual Vision

 

Abstract

Virtual vision prescribes using visually and behaviorally realistic 3D environments for carrying out camera networks research. These 3D environments, populated with life-like, self-animating objects such as pedestrians and automobiles, can serve as software laboratories within which camera networks of suitable complexity can be simulated and studied. We refer to such synthetic 3D environments, which allow us to deploy, experiment with, study, and evaluate simulated camera networks, as virtual vision simulators. Our own research on high-level control and coordination in pan/tilt/zoom camera networks has relied heavily on virtual vision simulators that we have developed. In April 2012 we released the virtual vision simulator that we have used for studying control and coordination issues in camera networks under the GNU General Public License v3. The source code is available at https://github.com/vclab/virtual-vision-simulator. This tutorial focuses on the virtual vision paradigm for camera networks research. The first half of the tutorial will discuss the virtual vision paradigm, its strengths, shortcomings, and limitations. To provide context, we will also briefly review our own work on camera networks. The second half of the tutorial will focus on the virtual vision simulator that we have used for our research. We will do hands-on exercises on how to use our virtual vision simulator for your own research. We will also describe how to create novel scenarios using this virtual vision simulator and how to simulate camera networks comprising active and passive cameras.
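To give a flavour of what such a simulation involves, here is a minimal, purely illustrative Python sketch of the idea: synthetic pedestrians wander through a scene while simulated PTZ cameras re-orient themselves to follow them. It does not use the actual virtual-vision-simulator code base or API; all class names and numbers are invented for the example.

    import math
    import random

    class Pedestrian:
        """A self-animating scene object performing a random walk on the ground plane."""
        def __init__(self, x, y):
            self.x, self.y = x, y

        def step(self, dt=0.1):
            self.x += random.uniform(-1.0, 1.0) * dt
            self.y += random.uniform(-1.0, 1.0) * dt

    class PTZCamera:
        """A simulated active camera described only by its position, pan angle, and zoom."""
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.pan = 0.0   # radians
            self.zoom = 1.0

        def point_at(self, target):
            # Pan toward the target and zoom in proportionally to its distance.
            dx, dy = target.x - self.x, target.y - self.y
            self.pan = math.atan2(dy, dx)
            self.zoom = min(10.0, max(1.0, math.hypot(dx, dy) / 5.0))

    # Toy simulation loop: every camera follows the pedestrian nearest to it.
    pedestrians = [Pedestrian(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(5)]
    cameras = [PTZCamera(0, 0), PTZCamera(50, 0), PTZCamera(25, 50)]
    for _ in range(100):
        for p in pedestrians:
            p.step()
        for cam in cameras:
            nearest = min(pedestrians, key=lambda p: math.hypot(p.x - cam.x, p.y - cam.y))
            cam.point_at(nearest)

    print([round(cam.pan, 2) for cam in cameras])

A real virtual vision simulator replaces the toy geometry above with rendered imagery and behaviorally plausible pedestrian models, so that actual vision algorithms can be run on the simulated camera feeds.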

 

Schedule

Part 1: Virtual vision paradigm for camera networks research: an introduction and critique

Part 2: Using UOIT's VCLAB virtual vision simulator for your own camera networks research

For the second half, please make sure that you bring a laptop and that you have already downloaded and installed the virtual vision simulator available at https://github.com/vclab/virtual-vision-simulator.

 

Tutorial Material

We plan to distribute slides and other relevant material closer to the date of the tutorial.

 

Organizer's Biography

Faisal Qureshi is an Assistant Professor of Computer Science at the University of Ontario Institute of Technology (UOIT), Oshawa, Canada. He obtained a Ph.D. in Computer Science from the University of Toronto in 2007. He also holds an M.Sc. in Computer Science from the University of Toronto and an M.Sc. in Electronics from Quaid-e-Azam University, Pakistan. Prior to joining UOIT, he worked as a Software Developer at Autodesk. His research interests include sensor networks, computer vision, and computer graphics. He has also published papers in space robotics. He has interned at ATR Labs (Kyoto, Japan), AT&T Research Labs (Red Bank, NJ, USA), and MDA Space Missions (Brampton, ON, Canada). He is a member of the IEEE and the ACM.

 


 

Prof. Christian Micheloni

Video analysis in Pan-Tilt-Zoom camera networks

 

Abstract

Video-surveillance networks are usually based on static cameras that always provide footage with the same point of view and resolution. Pan-Tilt-Zoom (PTZ) cameras, by contrast, are able to dynamically modify their field of view. This functionality introduces new capabilities to camera networks, such as increasing the resolution of moving targets and adapting the sensor coverage. On the other hand, PTZ functionality requires solutions to new challenges, such as controlling the PTZ parameters, estimating the ego-motion of the cameras, and calibrating the moving cameras.
This tutorial provides an overview of the main video processing techniques and the current trends in this active field of research. Autonomous PTZ cameras mainly aim to detect and track targets at the largest possible resolution. The most recent techniques for image registration and ego-motion compensation will be presented for detection purposes.
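As a rough illustration of the registration and ego-motion compensation step mentioned above (the tutorial material itself is in MATLAB), the following Python/OpenCV sketch estimates the dominant inter-frame motion of a panning camera with a RANSAC homography and warps the previous frame onto the current one, so that simple frame differencing highlights independently moving targets. The frame file names are placeholders.

    import cv2
    import numpy as np

    # Two consecutive frames from a panning PTZ camera (placeholder file names).
    prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

    # Track sparse features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # The dominant motion of a rotating/zooming camera is well modelled by a homography.
    H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous frame into the current frame's coordinates (ego-motion compensation).
    stabilized = cv2.warpPerspective(prev, H, (curr.shape[1], curr.shape[0]))

    # After compensation, frame differencing highlights independently moving objects.
    motion_mask = cv2.threshold(cv2.absdiff(curr, stabilized), 25, 255, cv2.THRESH_BINARY)[1]
    cv2.imwrite("motion_mask.png", motion_mask)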
A further aspect of PTZ camera networks is the coverage problem. A moving camera can adapt its field of view and hence change the coverage. The tutorial will provide an overview of a possible solution for the automatic reconfiguration of PTZ networks for optimal 3D coverage.
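To make the coverage idea concrete, here is a small, hypothetical Python sketch, not the reconfiguration method presented in the tutorial: the area to cover is discretized into grid cells, each camera exposes a few candidate PTZ presets with known footprints, and a greedy pass assigns to each camera the preset that adds the most still-uncovered cells.

    # Hypothetical illustration: greedy selection of one PTZ preset per camera
    # to maximize coverage of a discretized ground plane. Each preset is
    # described by the set of grid cells it observes.
    candidate_presets = {
        "cam1": {"wide":  {(0, 0), (0, 1), (1, 0), (1, 1)},
                 "tele":  {(2, 2), (2, 3)}},
        "cam2": {"left":  {(1, 1), (1, 2)},
                 "right": {(3, 3), (3, 4), (2, 3)}},
    }

    covered = set()
    chosen = {}
    for cam, presets in candidate_presets.items():
        # Pick the preset that adds the largest number of still-uncovered cells.
        best = max(presets, key=lambda p: len(presets[p] - covered))
        chosen[cam] = best
        covered |= presets[best]

    print(chosen)                     # e.g. {'cam1': 'wide', 'cam2': 'right'}
    print(len(covered), "cells covered")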

The first part of the tutorial will describe how the paradigms valid for static cameras can be carried over to the PTZ world. In particular, calibration and motion detection problems will be presented. The second half of the tutorial will present a cooperative PTZ method to improve localization in an outdoor environment. Finally, a scheme to optimally determine the set of PTZ parameters for 3D area coverage will be presented.
Each section will include a real MATLAB implementation of the proposed techniques.

Tutorial Material

Slides and MATLAB code are available here.

 

Presenter's Biography

Dr. Christian Micheloni, Dept. of Mathematics and Computer Science, Università degli Studi di Udine.

Christian Micheloni (M.Sc. '02, Ph.D. '06) received the Laurea degree (cum laude) and the Ph.D. in Computer Science from the University of Udine, Udine, Italy, in 2002 and 2006, respectively. He is an Assistant Professor at the University of Udine. Since 2000 he has taken part in European research, working under contract on several European projects. He has co-authored more than 60 scientific works published in international journals and refereed international conferences. He serves as a reviewer for several international journals and conferences. Dr. Micheloni's main interests involve active vision for scene understanding by means of moving cameras (PTZ, UAV, UGV, etc.) and machine learning for the classification and recognition of moving objects. He is also interested in pattern recognition techniques for trajectory analysis and clustering, for camera parameter configuration and, more recently, for reactive network management (reconfiguration and cooperation). All these techniques are mainly developed and applied for video surveillance purposes. He is a member of the International Association for Pattern Recognition (IAPR) and a member of the IEEE.

 


 

Dr. Senem Velipasalar

Smart Cameras Getting Smarter

 

Abstract

With the introduction of battery-powered, embedded smart cameras, it has now become viable to install many spatially distributed cameras interconnected by wireless links. A smart camera is a stand-alone unit that combines sensing, processing, and communication on a single embedded platform. Yet it has limited resources, such as energy and processing power. Thus, many challenges need to be addressed to have operational embedded smart camera systems and wireless smart-camera networks (Wi-SCaNs).

Since battery life is limited and video processing tasks, such as foreground detection and tracking, consume a considerable amount of energy, it is essential to have lightweight algorithms that increase the energy efficiency of each camera node and, thus, the overall lifetime of the network. As will be discussed in the tutorial, even with no computer vision processing, just grabbing and buffering a frame requires a significant amount of energy. Thus, it is not sufficient to focus only on the vision algorithms to significantly increase the lifetime of nodes. Methodologies are needed to adaptively determine when and for how long a camera node can be idle.
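As a purely illustrative example of such a methodology (not the specific approach presented in the tutorial), the Python sketch below lengthens a camera node's idle interval while the scene stays quiet and snaps back to a short interval as soon as foreground activity is detected, trading detection latency for battery life. The callbacks grab_frame, detect_foreground, and sleep are hypothetical hooks supplied by the platform.

    # Illustrative duty-cycling policy for a battery-powered smart camera node.
    MIN_IDLE = 0.5     # seconds: shortest sleep while the scene is active
    MAX_IDLE = 30.0    # seconds: longest sleep allowed when nothing is happening

    def next_idle_time(current_idle, activity_detected):
        """Back off exponentially while the scene is quiet; react quickly otherwise."""
        if activity_detected:
            return MIN_IDLE
        return min(MAX_IDLE, current_idle * 2.0)

    def run_node(grab_frame, detect_foreground, sleep):
        idle = MIN_IDLE
        while True:
            frame = grab_frame()               # expensive: sensing and buffering cost energy
            active = detect_foreground(frame)  # lightweight foreground/activity test
            if active:
                pass                           # hand the frame to tracking / event detection here
            idle = next_idle_time(idle, active)
            sleep(idle)                        # sensor and radio can be powered down during this interval

Keeping the policy parameterized on grab_frame, detect_foreground, and sleep lets the same scheduling logic be exercised in a desktop simulation or on an embedded platform.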

In this tutorial, the following topics will be covered: (i) an introduction to smart cameras, (ii) an overview and comparison of different smart camera platforms and systems, (iii) the effect of algorithm choice on performance, (iv) adaptive methodologies to decrease energy consumption, (v) whether and how a camera can itself determine when and for how long to be idle, (vi) static versus mobile cameras, and the additional challenges introduced by mobility, and (vii) different application areas of smart cameras, ranging from detecting events of interest to detecting the brake lights and turn signals of vehicles ahead.

 

Tutorial Material

We plan to distribute the slides closer to the date of the tutorial.

 

Presenter's Biography

Dr. Senem Velipasalar is an Assistant Professor in the Department of Electrical Engineering and Computer Science at Syracuse University. She received the Ph.D. and M.A. degrees in Electrical Engineering from Princeton University in 2007 and 2004, respectively, the M.S. degree in Electrical Sciences and Computer Engineering from Brown University in 2001, and the B.S. degree in Electrical and Electronic Engineering with high honors from Bogazici University in 1999. During the summers of 2001 through 2005, she worked in the Exploratory Computer Vision Group at the IBM T.J. Watson Research Center. Between 2007 and 2011, she was an Assistant Professor in the Department of Electrical Engineering at the University of Nebraska-Lincoln (UNL).

The focus of her research has been on wireless smart camera networks, battery-powered embedded smart cameras, distributed multi-camera tracking and surveillance systems, and automatic event detection from videos. Her research interests include embedded computer vision, video/image processing, distributed multi-camera systems, pattern recognition, statistical learning, signal processing and information theory.

Dr. Velipasalar received a Faculty Early Career Development (CAREER) Award from the National Science Foundation (NSF) in 2011. She is a coauthor of the paper that received the third-place award at the 2011 ACM/IEEE International Conference on Distributed Smart Cameras. She received the Best Student Paper Award at the IEEE International Conference on Multimedia & Expo (ICME) in 2006. She is also the recipient of the EPSCoR First Award, two UNL Layman Awards, an IBM Patent Application Award, and Princeton and Brown University graduate fellowships. Dr. Velipasalar is a member of the IEEE.