Invited Speakers

Abstract: The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense: it starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence of the network dynamics. We propose to modify this picture: if the brain remembers by converging to states representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior with a novel neural architecture and learning algorithm in which the system performs self-organization utilizing a stability mode and an instability mode of the dynamical system. Based on this observation, we developed a self-organizing line attractor that is capable of generating new lines in the feature space to learn unrecognized patterns. We employ a neighborhood dependability criterion in the learning strategy of the network. A locally weighted connectivity is modeled with a specific distance metric to improve the convergence/divergence characteristics and to reduce training and testing computation time. Experiments performed on several face recognition databases have shown that the proposed nonlinear line attractor successfully identifies individuals and provides better recognition rates than state-of-the-art face recognition techniques. Experiments on these databases have also shown excellent recognition rates for images captured in complex lighting environments. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
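
The convergence/divergence idea can be illustrated with a deliberately simple toy model: a Hebbian (Hopfield-style) associative memory in which the fraction of units flipped by the first update serves as the instability indicator, and a pattern flagged as novel is simply added to memory. This is only a sketch of the stability/plasticity idea described above, not the proposed nonlinear line attractor; the function names and the 0.25 threshold are assumptions made for illustration.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix for a toy +/-1 associative memory."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def update(W, x):
    """One synchronous update step of the network."""
    return np.where(W @ x >= 0, 1.0, -1.0)

def novelty(W, probe):
    """Fraction of units the first update flips: small near a stored state,
    roughly 0.5 for a pattern unrelated to anything in memory."""
    return np.mean(update(W, probe) != probe)

rng = np.random.default_rng(0)
memory = rng.choice([-1.0, 1.0], size=(2, 256))              # two stored patterns
W = hebbian_weights(memory)

noisy = memory[0] * np.where(rng.random(256) < 0.1, -1, 1)   # corrupted copy of pattern 0
unknown = rng.choice([-1.0, 1.0], size=256)                  # unrelated pattern

print("novelty of noisy familiar probe:", novelty(W, noisy))    # low
print("novelty of unknown probe:       ", novelty(W, unknown))  # high

# Plasticity step, loosely mirroring the "create new states" idea:
# a probe flagged as novel is stored and the weights are retrained.
if novelty(W, unknown) > 0.25:
    memory = np.vstack([memory, unknown])
    W = hebbian_weights(memory)
```

A probe close to a stored pattern barely moves under the dynamics, whereas an unrelated probe flips roughly half its units on the first update, and that large motion is the cue to create a new stored state rather than force an association with existing memories.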

In this talk, I will address the above-mentioned issues. First, I will present a comprehensive framework for the analysis of computational imaging systems and provide explicit performance guarantees for many CI systems, such as light field and extended-depth-of-field cameras. Second, I will show how camera arrays can be exploited to capture the various dimensions of light, such as spectrum and angle. Capturing these dimensions leads to novel imaging capabilities such as post-capture refocusing, hyperspectral imaging, and natural image retouching. Finally, I will talk about how various machine learning techniques, such as robust regression and matrix factorization, can be used to solve many imaging problems.
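
As a minimal, generic example of the kind of robust regression mentioned above (not the speaker's specific method), the sketch below fits a line to data containing gross outliers using iteratively reweighted least squares with Huber weights; the parameter values and synthetic data are assumptions made for illustration.

```python
import numpy as np

def irls_huber(A, b, delta=1.0, iters=30):
    """Robust linear regression: minimize the Huber loss of A @ x - b
    via iteratively reweighted least squares (IRLS)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary least-squares start
    for _ in range(iters):
        r = A @ x - b
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))   # Huber weights
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

# Fit y = 2t + 1 from noisy samples, 20% of which are gross outliers.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
y = 2 * t + 1 + 0.05 * rng.standard_normal(t.size)
outliers = rng.random(t.size) < 0.2
y[outliers] += 5 * rng.standard_normal(outliers.sum())

A = np.column_stack([t, np.ones_like(t)])             # design matrix [t, 1]
print("ordinary LS:", np.linalg.lstsq(A, y, rcond=None)[0])
print("robust IRLS:", irls_huber(A, y))
```

The reweighting step downweights samples with large residuals, so the outliers have little influence on the final estimate compared with ordinary least squares.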

Biography: Dr. Vijayan K. Asari is a Professor in Electrical and Computer Engineering and Ohio Research Scholars Endowed Chair in Wide Area Surveillance at the University of Dayton, Dayton, Ohio, USA. He is the director of the Center of Excellence for Computer Vision and Wide Area Surveillance Research (Vision Lab) at UD. A leader in innovation and algorithm development, the UD Vision Lab specializes in object detection, recognition, and tracking in wide area surveillance imagery captured by visible, infrared, thermal, and LiDAR (Light Detection and Ranging) sensors. Dr. Asari's research activities also include the development of novel algorithms for 3D scene creation and visualization from 2D video streams, automatic visibility improvement of images captured in various weather conditions, human identification, human action and activity recognition, and brain signal analysis for emotion recognition and brain-machine interfaces. Dr. Asari received his BS in electronics and communication engineering from the University of Kerala, India, in 1978, and M.Tech. and PhD degrees in Electrical Engineering from the Indian Institute of Technology, Madras, in 1984 and 1994, respectively. Prior to joining UD in February 2010, Dr. Asari worked as a Professor in Electrical and Computer Engineering at Old Dominion University, Norfolk, Virginia, for 10 years. Dr. Asari worked at the National University of Singapore during 1996-98 and led a research team for the development of a vision-guided microrobotic endoscopy system. He also worked at Nanyang Technological University, Singapore, during 1998-2000 and led the computer vision and image processing research activities in the Center for High Performance Embedded Systems at NTU. Dr. Asari holds three patents and has published more than 450 research papers, including 78 peer-reviewed journal papers, in the areas of image processing, pattern recognition, machine learning, and high performance embedded systems. Dr. Asari has supervised 17 PhD dissertations and 32 MS theses during the last 12 years. Currently, 21 graduate students are working with him on different sponsored research projects. Dr. Asari is participating in several federally and privately funded research projects and has so far managed around $15M in research funding. Dr. Asari has received several teaching, research, and advising awards. He is a Senior Member of IEEE and SPIE, and a member of the IEEE Computational Intelligence Society. Dr. Asari is a co-organizer of several SPIE and IEEE conferences and workshops.