The Fourth IEEE Workshop on

Embedded Computer Vision

Saturday, June 28, 2008
Anchorage, Alaska
USA 
http://ecvw08.inf.uth.gr

In conjunction with IEEE CVPR 2008 


General Chairs:

Sek Chai

Motorola

 

Branislav Kisacanin

Texas Instruments

 

Program Chair:

 

Nikolaos Bellas

University of Thessaly

 

Web Chair:

 

Manos Koutsoubelias

University of Thessaly


Program Committee:

Moshe Ben-Ezra

Microsoft Research Asia

Shuvra Bhattacharyya

University of Maryland

Terrance Boult

University of Colorado at Colorado Springs

Sek Chai

Motorola

Goksel Dedeoglu

Texas Instruments

Antonio Gentile

University of Palermo, Italy

Richard Kleihorst

NXP Research

Kurt Konolige

SRI International

Ajay Kumar

IIT Delhi

Abelardo Lopez-Lagunas

ITESM-Toluca, Mexico

Hongying Meng

University of York

Darnell Moore

Texas Instruments

Burak Ozer

Verificon Corporation

Bernhard Rinner

Klagenfurt University, Austria

Sezai Sablak

Bosch

Mainak Sen

Cisco Systems

Vinay Sharma

Texas Instruments

Changsong Shen

University of British Columbia

Salvatore Vitabile

University of Palermo, Italy

Linda Wills

Georgia Institute of Technology

Scott Wills

Georgia Institute of Technology

Wayne Wolf

Georgia Institute of Technology

Lin Zhong

Rice University

Zoran Zivkovic

NXP

 

Call For Papers


Recent years have witnessed a dramatic increase in the use of computer vision in embedded systems. Computer vision was successfully used, for example, in mission-critical systems such as the landing of the recent rovers on Mars, and in computer-aided surgery.

Computer vision is also widely used in industrial embedded systems, taking part in production and inspection processes. Cameras find their way into everyday appliances and mobile devices such as cell phones, PDAs, presentation appliances, and vehicles. Cameras themselves are becoming "smarter," gaining capabilities for processing the acquired images inside the camera. Furthermore, distributed smart camera systems, which integrate computer vision algorithms with embedded processing and computer networking techniques, are enabling major advances in areas such as surveillance and human identification.

Traditionally, different embedded computer vision domains, such as computer-aided surgery and surveillance, were treated separately because of their differing natures. However, these domains share many common problems related to their real-time, embedded-system characteristics.
 

The Embedded Computer Vision Workshop (ECVW) aims to bring together researchers working on computer vision problems that share embedded system characteristics.

In particular, the workshop will address the following questions:
-- What are the current research problems and the applications within the computer vision domain that are specific to embedded systems --- for example, algorithms for efficient utilization of embedded processing architectures for computer vision, and conversely, architectures and design methods for effectively supporting embedded computer vision systems?
-- What are the specific issues in the embedded systems domain that are relevant to computer vision (for example, meeting constraints on real-time performance, power consumption, and memory requirements in embedded computer vision systems)?
 

Research papers are solicited in, but not limited to, the following topics:

  • Analysis of computer vision problems that are specific to embedded systems.

  • Analysis of embedded systems problems that are specific to computer vision.

  • Verification methods for mission-critical embedded computer vision systems.

  • New trends in programmable digital signal processors and their computational models.

  • Reconfigurable processors and computer vision.

  • Embedded multiprocessor systems and design methods.

  • Hybrid / distributed models and architectures for embedded computer vision.

  • Applications of embedded computer vision.

  • Development tools for computer vision applications aimed at embedded systems.
     

This workshop is the fourth in its series. The first three Workshops on Embedded Computer Vision were held in conjunction with CVPR and were very successful. Selected papers from the first workshop are being published in a special issue of the EURASIP Journal on Embedded Systems, and we intend to pursue similar special journal issues for ECVW 2008.

Reviewing will be blind.
 


Important Dates

 

New paper submission deadline: March 21, 2008
Paper submission: March 14, 2008
Notification to the authors: April 18, 2008
Receipt of camera-ready copy: April 28, 2008
Workshop: June 28, 2008


Paper Submission

Submissions to the workshop will be handled electronically, and papers must be submitted in PDF format. For review purposes, the page limit is 8 pages, including figures and text. For the camera-ready version, the limit is 2 MB for the electronic file (any number of pages); because the proceedings are published on DVD, the camera-ready limit is on file size rather than page count. Please use the IEEE Computer Society format, which you can access at http://vision.eecs.ucf.edu/submissions.htm

The Online Submission Page is here. Once the abstract has been submitted, you will be given a paper ID and a password. Please note down these two pieces of information, as they will be needed in the final phase of submitting your PDF paper. You will receive a confirmation email within 48 hours after the final phase of your submission is complete.

 

Please read this carefully when preparing your final camera-ready version.




First Keynote Speaker

  • Dr. Alan Lipton, ObjectVideo

Title: "Video Analytics: Taking it to the Next Level"

Abstract: This keynote presentation introduces an up-and-coming application of computer vision called video analytics: detecting human activities and behaviors, in real time, in surveillance and monitoring video streams. The presentation covers the technology, its market applications, and the real market requirements that drive the technology into firmware. As an illustration, an embedded DSP-based video analytics system is presented and analyzed. Finally, the presentation considers both the commercial and the technological challenges that video analytics will face in the future.

Bio: Dr. Alan Lipton is the chief technology officer at ObjectVideo, where he oversees a team of over 50 scientists and engineers who develop and commercialize computer vision technologies for applications such as security, surveillance, and business-intelligence gathering. Prior to joining ObjectVideo, Dr. Lipton was a member of the research faculty at Carnegie Mellon University's Robotics Institute in Pittsburgh, Pa., where he served as a project co-manager of DARPA's VSAM project, which developed automated, real-time algorithms that guide a network of active video sensors to monitor the activities of people and vehicles in complex scenes. He received his Ph.D. in Electrical and Computer Systems Engineering from Monash University, Melbourne, Australia, in 1996. For his thesis, he studied the problem of vision-based mobile robot navigation using natural landmarks.

Second Keynote Speaker

  • Sharathchandra Pankanti, IBM

Title: "A Journey towards Small Secure Scanner"

Abstract: Worldwide retail checkout "shrink" (loss) is estimated to be a $22B USD business opportunity and consists of four primary fraudulent components: (i) ticket switching: not claiming the valid item at the point of sale; (ii) cash/refund fraud: the cashier pocketing revenues that belong to the retailer; (iii) fake scans (also referred to as "sweethearting"): the cashier offering unauthorized discounts to the shopper; and (iv) non-scans: the cashier not scanning all items at the point of sale. Our work addresses the problem of ticket switching at the checkout as an appearance-based object verification problem. We argue that the retail environment offers a realistic and practical general-purpose vision test bed, both in terms of the richness of the object repertoire and the complexity of the imaging environment. Designing a camera-based object verification system is a challenging computer vision research problem because there are tens of thousands of items in a store. Moreover, there is a wide variety of object forms, colors, shapes, and sizes that must be accounted for. Furthermore, because of the variety of illumination conditions, learning invariant visual features of the shopping items is also very complex. We present a visually augmented checkout system that is completely automatic in operation, ranging from image capture and object segmentation to training/learning and matching. Based on real data involving thousands of shopping items collected over an extended period of time (more than 20 months), our experimental results demonstrate that visual technology is an effective (in terms of security and usability) and inexpensive component of the design of next-generation self-checkout systems. Having demonstrated the value of an appearance-based self-checkout system resistant to ticket switching, we propose a small-footprint revision of this technology to be embedded within a (bioptic) laser scanner.
We will elaborate on the additional challenges this new design poses in terms of capture, segmentation, and matching, and we will present preliminary results from it. In summary, we conclude that the visual appearance of items is rich in information, that we can reliably extract this information, and that it is sufficiently distinctive to yield a real-life, practical, general-purpose vision system with acceptable item verification performance. Further, we will illustrate that the technology is not only more cost-effective, secure, and usable, but also shows promise of being packaged into the small-form-factor point-of-sale scanner that is ubiquitous at the retail checkout.

Bio: Sharath Pankanti obtained his Ph.D. from the Department of Computer Science, Michigan State University, in 1995. His Ph.D. dissertation topic was methods of integrating computer vision modules. He joined the Exploratory Computer Vision group at the IBM T. J. Watson Research Center in 1995 as a postdoctoral fellow and became a Research Staff Member in 1996. He worked on the IBM Advanced Identification Project until 1999. Since then, he has worked on a wide variety of computer vision and pattern recognition systems, including "footprints," a system for tracking people based on their infrared emission; "PeopleVision," a system for detecting and tracking individuals at multiple scales in indoor and outdoor environments; large-scale biometric indexing systems; and large-scale optical object recognition systems. Since 2008, as manager of the Exploratory Computer Vision Group, he has led multiple biometrics, object detection, and recognition projects involving static and moving cameras. His research interests include computer vision system designs for effective safety, security, and convenience. He has published about 70 papers in peer-reviewed conference/workshop proceedings and journals and has contributed to 20 inventions spanning biometrics, object detection, and recognition. He co-edited the first comprehensive book on biometrics, "Biometrics: Personal Identification" (Kluwer, 1999), and co-authored "A Guide to Biometrics" (Springer, 2004), which is used in many undergraduate and graduate biometrics curricula. He lives in Norwalk, Connecticut, with his wife and son, and dreads the morning commute on I-95.



Invited Speaker

  • To be announced
  • Title: To be announced 
     
    Abstract: To be announced
