Whenever a user needs to find a picture, he or she depends on the words that have been associated with the selected image. Services like Google’s Image Labeler even ask users to tag images with related words, but this step may no longer be necessary in the near future. Dr. Naveen Agnihotri will demonstrate a method that enables computers to achieve this through automated image recognition. It seems the next generation of online image and video search is almost here.
Neuroscientist to Take People Under the Hood of Image Search at Infonortics Search Meeting in Boston April 27-28
Dr. Naveen Agnihotri to Demonstrate Computer Image Recognition Method That Mimics the Human Brain
Boston, MA (PRWEB) April 24, 2009 — Dr. Naveen Agnihotri, co-founder and chief technology officer of Milabra, will present a method that enables computers to identify and understand images as people do at the Infonortics Search Engine Meeting, April 27-28 in Boston.
The method, known as parts-based representation, promises to significantly improve image indexing and search software, which typically relies on more basic techniques such as indexing text labels that people add to images or comparing images pixel by pixel to known images.
Automated image recognition is at the foundation of the next generation of search and web applications. Images, both still and video, comprise the fastest-growing type of content on the web. When images on the web can be classified and indexed as readily as text is today, the stage is set for the kind of explosive creativity that the text-based web fostered with Web 2.0 applications. Dr. Agnihotri calls this next generation of image-based search and applications the “visual web”.
“Visual-media classification and indexing is what provides developers of Visual Web applications with the base for innovating a whole new generation of Internet applications,” says Dr. Agnihotri.
Using the parts-based representation method, developers train software to recognize features common to an entire class of images. One example is training the software to recognize noses, which are common to faces; another is training it to recognize sand, which is common to beaches. Once trained, these software-based classifiers can recognize any image containing those parts, much as humans do. For instance, the software can understand that a photo or video depicts a beach with people on it even if it has never seen that particular beach or those people before.
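The idea above can be illustrated with a toy sketch. This is not Milabra’s actual algorithm; the part names, scene definitions, and the simple all-parts-present rule are hypothetical stand-ins for trained part detectors. It only shows how recognizing shared parts lets a classifier label scenes it has never seen before.

```python
# Toy sketch of parts-based classification (illustrative only).
# In a real system, each part would be detected by a trained
# classifier run over the image; here we simulate detector output
# as a set of part names.

# Hypothetical scene classes, each defined by the parts expected in it.
SCENE_PARTS = {
    "face": {"nose", "eyes", "mouth"},
    "beach": {"sand", "water"},
    "beach with people": {"sand", "water", "nose"},
}

def classify(detected_parts):
    """Return every scene label whose required parts were all detected."""
    return sorted(
        scene
        for scene, parts in SCENE_PARTS.items()
        if parts <= detected_parts  # all required parts are present
    )

# A detector run on a vacation photo might report these parts:
detected = {"sand", "water", "nose", "eyes"}
print(classify(detected))  # ['beach', 'beach with people']
```

Because the decision depends only on which parts are present, the same part detectors generalize to new images: any photo containing sand, water, and a nose is labeled a beach with people, whether or not that exact scene was ever in the training data.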
The advantages of parts-based representation over common image search techniques include:
* scalable image processing
* greater accuracy in identifying classes of images
* lower processing costs
* better indexing to enable application development
A neuroscientist and computer scientist, Dr. Agnihotri applies his neural network research to Milabra’s software development. He holds an M.S. in biological engineering from the University of Georgia and a Ph.D. in neuroscience from Columbia University, where he worked with Nobel laureate Eric Kandel on brain network processes. He completed a postdoctoral fellowship in computational neuroscience at MIT and taught neuroscience at Columbia University.
Dr. Agnihotri’s presentation is part of a broader panel discussion: “Non-Text Search Technologies: Speech, Images, Video” on Monday, April 27 at 2 p.m. The panel is chaired by Sue Feldman, IDC’s vice president for search and discovery technologies. Additional panelists include Tom Wilde, chief executive officer of EveryZing, and Michael Phillips, co-founder and chief technology officer of vlingo.
The Infonortics Search Engine Meeting, currently in its 14th year, is an in-depth exploration of search and content processing.
# # #