In order for computer vision to mature from both scientific and industrial
points of view, we must have objective methods for evaluating algorithms.
Towards this end, the purpose of this workshop is to bring together
researchers in the computer vision community to present papers and discuss
methods for empirical evaluation of vision algorithms. The focus of the
workshop is on experimental methods. The workshop will include invited
speakers, panel discussions, and presentations of accepted papers. We hope
to foster interaction and discussion on this subject, and to learn from
related fields that have successfully used evaluation methods.
Click here to see post-workshop summary.
Click here to see the advance program of the workshop!
The workshop includes invited speakers and presentations of contributed
papers. In addition, editors of some of the major vision journals have
agreed to participate in a panel discussion on the theme "Expectations for papers
on empirical evaluation submitted to computer vision journals."
Contributed papers will be reviewed based on content relevant to the theme of the workshop, with an emphasis on sound empirical methods and results. Relevant topics may include, but are not limited to:
Three copies of each paper should be received no later than 30 January 1998 at:
Computer Science & Engineering
University of South Florida
4302 East Fowler Avenue, ENB 326
Tampa, FL 33620-5399
Each submission should include a cover page listing the address, phone number, fax number, and e-mail address of the corresponding author. Papers are limited to 30 double-spaced pages (12 pt, 1-inch margins), including figures, references, etc. Papers will be reviewed based on content relevant to the scope of the workshop, with an emphasis on empirical results. Accepted papers will be published as part of an edited book.
Click here for the European Computer Vision Net Benchmarking web page.
Click here for a link to the Workshop on Evaluation and Validation of CV Algorithms.
University of Toronto's "Delve" ("Data for Evaluating Learning in Valid Experiments")