Ontology-Based Image Retrieval

Eero Hyvönen
University of Helsinki / Helsinki Institute for Information Technology (HIIT)
Department of Computer Science, P.O. Box 26 (Teollisuuskatu 23), FIN-00014 UNIVERSITY OF HELSINKI, FINLAND
Eero.Hyvonen@cs.Helsinki.FI
Samppa Saarela
Helsinki Institute for Information Technology (HIIT) / University of Helsinki
Department of Computer Science, P.O. Box 26 (Teollisuuskatu 23), FIN-00014 UNIVERSITY OF HELSINKI, FINLAND
Samppa.Saarela@cs.Helsinki.FI
Avril Styrman
Helsinki Institute for Information Technology (HIIT) / University of Helsinki
Department of Computer Science, P.O. Box 26 (Teollisuuskatu 23), FIN-00014 UNIVERSITY OF HELSINKI, FINLAND
Avril.Styrman@cs.Helsinki.FI
Kim Viljanen
Helsinki Institute for Information Technology (HIIT) / University of Helsinki
Department of Computer Science, P.O. Box 26 (Teollisuuskatu 23), FIN-00014 UNIVERSITY OF HELSINKI, FINLAND
Kim.Viljanen@cs.Helsinki.FI

ABSTRACT

Semantic web ontology and metadata languages provide new means for annotating and retrieving images. This poster, based on [4], considers the situation where a user is faced with an image repository whose content is complicated and semantically unknown. We show how ontologies can then help the user in formulating the information need, the query, and the answers. We approach this problem through a case study based on an image collection on promotion ceremonies of the Helsinki University Museum. The actual implementation will be demonstrated at the poster session.

Keywords

semantic web, image, information retrieval, ontology

1. INTRODUCTION

A typical way to publish an image data repository is to create a keyword-based query [1] interface to an image database. Here the user may select filtering values or apply keywords to the different database fields, such as "creator" or "time", or to the content descriptions, including classifications and free-text documentation. More complex queries can be formulated, e.g., by using Boolean logic.
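As a rough sketch, such Boolean keyword filtering amounts to set operations over annotation keywords. The records, field names, and keywords below are invented for illustration; they are not the actual database schema of the case study.

```python
# Hypothetical image records with keyword annotations.
records = [
    {"creator": "museum", "time": "1950", "keywords": {"promotion", "ceremony"}},
    {"creator": "museum", "time": "1936", "keywords": {"portrait", "garland"}},
]

def boolean_search(records, must=(), must_not=()):
    """Return records whose keyword set contains all `must` terms
    and none of the `must_not` terms (AND / NOT semantics)."""
    return [r for r in records
            if set(must) <= r["keywords"]
            and not (set(must_not) & r["keywords"])]

hits = boolean_search(records, must=["ceremony"], must_not=["portrait"])
```

The conjunction is a subset test and the negation an intersection test, which is essentially what a Boolean query interface compiles a user's selections into.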

Keyword-based search methods suffer from several general limitations: the presence of a keyword in a document does not guarantee that the document is relevant, and relevant documents may not contain the keyword explicitly. Synonyms lower the recall rate, homonyms lower the precision rate, and semantic relations such as hyponymy, meronymy, and antonymy [3] are not exploited.

Keyword-based search is useful especially to a user who knows what keywords are used to index the images and therefore can easily formulate queries. This approach is problematic, however, when the user does not have a clear goal in mind, does not know what there is in the database, and what kind of semantic concepts are involved in the domain. Using the keyword-based approach would lead to the following problems:

Formulating the information need. The user does not necessarily know what question to ask. One may only have a general interest in the topic. How to help the user in focusing this interest within the database contents?

Formulating the query. The user cannot necessarily figure out what keywords to use in formulating the search corresponding to her information need. How to help the user in formulating queries?

Formulating the answer. Generating image hit lists for keywords would probably miss the most interesting aspect of the repository: the images are related to each other in many interesting ways. In our case, the ceremonial occasions follow certain patterns in place and time, and the people and surroundings depicted in the images recur in different events. These semantic structures should somehow be exposed to the audience. The goal of an ordinary museum visitor is often quite different from trying to find certain images: the user wants to learn about the past and experience it with the help of the images.

We argue that semantic web technologies provide a promising new approach to these problems.

2. SEMANTIC ANNOTATION

The problem of creating metadata for images has been of vital importance to art and historical museums when cataloging collection items and storing them in digital form. The following approaches are commonly used in annotating images:

Keywords. Controlled vocabularies are used to describe the images in order to ease retrieval. In Finland, for example, the Finnish web thesaurus YSA [12] is used for this task, augmented with museum- and domain-specific keyword lists.

Classifications. There are large classification systems, such as ICONCLASS [11, 13] and the Art and Architecture Thesaurus [7], that classify different aspects of life into hierarchical categories. An image is annotated by a set of categories that describe it. For example, an image of a seal depicting a castle could be related to the classes "seals" and "castles". The classes form a hierarchy and are associated with corresponding keywords. The hierarchy enriches the annotations. For example, since castles are a subclass of "buildings", the keyword "building" is relevant when searching for images of castles.
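This hierarchy-based enrichment can be sketched in a few lines. The mini-hierarchy and annotations below are invented for illustration and are far smaller than real systems such as ICONCLASS.

```python
# Hypothetical subclass hierarchy: child class -> parent class.
subclass_of = {"castles": "buildings", "churches": "buildings",
               "seals": "objects"}

def ancestors(cls):
    """Yield cls and all its superclasses, walking up the hierarchy."""
    while cls is not None:
        yield cls
        cls = subclass_of.get(cls)

# Hypothetical annotations: image -> set of classification classes.
images = {"img1.jpg": {"seals", "castles"}, "img2.jpg": {"churches"}}

def search(term):
    """Images whose annotations include `term` directly or via the
    subclass hierarchy (e.g. "buildings" matches "castles")."""
    return sorted(img for img, classes in images.items()
                  if any(term in ancestors(c) for c in classes))
```

Here a query for "buildings" finds both images, even though neither is annotated with that class directly.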

Free text descriptions. Free text descriptions of the objects in the images are used. The information retrieval system indexes the text for keyword-based search.

Semantic web ontology techniques [2] and metadata languages [5] contribute to this tradition by providing means for defining class terminologies with well-defined semantics and a flexible data model for representing metadata descriptions. One possible approach is to use RDF Schema for defining the hierarchical ontology classes and RDF for expressing the image metadata in terms of the ontology. The ontology together with the image metadata forms an RDF graph, a knowledge base, which can facilitate new semantic information retrieval services. In our case application, we used this approach.
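To illustrate how such a knowledge base can be queried, the following sketch represents both the schema and the metadata as plain subject-predicate-object triples. The resource names are hypothetical, and a real implementation would use an RDF toolkit rather than raw tuples.

```python
# One set of triples holds both the RDF Schema part (the class
# hierarchy) and the RDF metadata part (the image annotations).
triples = {
    ("ex:Ceremony",      "rdfs:subClassOf", "ex:Event"),
    ("ex:Promotion2000", "rdf:type",        "ex:Ceremony"),
    ("ex:img42.jpg",     "ex:depicts",      "ex:Promotion2000"),
}

def objects(s, p):
    """All objects o such that (s, p, o) is in the graph."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

def types_of(resource):
    """Direct and inherited classes of a resource: a simplified
    RDFS entailment over rdfs:subClassOf."""
    found, frontier = set(), objects(resource, "rdf:type")
    while frontier:
        c = frontier.pop()
        if c not in found:
            found.add(c)
            frontier |= objects(c, "rdfs:subClassOf")
    return found
```

Because the schema and the instance data live in the same graph, a query can conclude that the promotion ceremony is also an Event, which is exactly the kind of inference the retrieval services build on.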

3. SEMANTIC IMAGE RETRIEVAL

The ontologies form the core of our system and are used for three purposes:

Annotation terminology. The ontological model provides the terminology and concepts in which the metadata of the images is expressed.

View-based search. The ontologies of the model, such as Events, Persons, and Places, provide different views into the promotion content. Each view consists of classes and instances represented using the metaphor of a file system browser, where classes correspond to directories and instances to files. Queries are formulated by selecting resources from these views; the query is the conjunction of the selections made. This view-based approach to information filtering along different indexing dimensions ("facets") is an adaptation of the HiBrowse system developed for bibliographical information retrieval [8].
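The conjunctive, view-based query can be sketched as follows. The facet names and metadata are invented for illustration, not taken from the actual promotion database.

```python
# Hypothetical metadata: image -> {view name: selected resource}.
metadata = {
    "img1.jpg": {"Event": "promotion1950", "Place": "GreatHall"},
    "img2.jpg": {"Event": "promotion1950", "Place": "Cathedral"},
    "img3.jpg": {"Event": "promotion2000", "Place": "GreatHall"},
}

def facet_query(selections):
    """Images matching every (view, resource) pair the user selected;
    an empty selection matches everything."""
    return sorted(img for img, md in metadata.items()
                  if all(md.get(view) == value
                         for view, value in selections.items()))
```

Each additional selection narrows the result set, mirroring how drilling into one directory per view narrows the browser's hit list.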

Semantic browsing. After finding a focus of interest, an image, the semantic ontology model together with the image instance data can be used to find relations between the selected image and other images in the repository. These images are recommended to the user: they do not necessarily match the filtering query but are likely to be of interest. They can, for example, contain relatives of the person in the photo.

By clicking on a recommended thumbnail photo, the large image in view is switched and a new set of recommended images is dynamically generated beneath it. This idea is loosely related to the topic-based navigation used in Topic Maps [5, 6] and to the book recommendation facility in use at Amazon.com. Figure 1 illustrates the user interface of the system.
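One simple way to generate such recommendations is to rank images by the number of annotated resources they share with the image in focus. The sketch below uses invented annotation data and a plain overlap count; it is an illustration of the idea, not the system's actual recommendation rules.

```python
# Hypothetical annotations: image -> set of ontology resources.
annotations = {
    "img1.jpg": {"person:Virtanen", "event:promotion1950"},
    "img2.jpg": {"person:Virtanen", "event:promotion2000"},
    "img3.jpg": {"event:promotion1950"},
    "img4.jpg": {"place:GreatHall"},
}

def recommend(focus):
    """Other images ranked by how many resources they share with
    `focus`; images with nothing in common are dropped."""
    target = annotations[focus]
    scored = [(len(target & res), img)
              for img, res in annotations.items() if img != focus]
    return [img for score, img in sorted(scored, reverse=True) if score > 0]
```

A richer version would follow ontology relations (e.g. family ties between persons) rather than only exact shared resources, which is what makes the browsing semantic rather than purely co-occurrence based.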

Figure 1: User interface to the image server.

4. DISCUSSION

Our work showed that ontologies can be used not only for annotation and precise information retrieval [9, 10], but also for helping the user in formulating the information need and the corresponding query. This is important in applications such as the promotion exhibition, where the domain semantics are complicated and not necessarily known to the user. Furthermore, the ontology-enriched knowledge base of image metadata can be applied to constructing more meaningful answers to queries than just hit-lists. For example, in our implementation, the underlying knowledge base provided the user with a semantic browsing facility between related recommended images.

The major difficulty in the ontology-based approach is the extra work needed in creating the ontology and the detailed annotations. We believe, however, that in many applications, such as our case problem, this price is justified by the better accuracy obtained in information retrieval and by the new semantic browsing facilities offered to the end-user. The trade-off between annotation work and quality of information retrieval can be balanced by using less detailed ontologies and annotations, if needed.

5. ACKNOWLEDGEMENTS

Thanks to Tero Halonen, Kati Heinämies, Robert Holmberg, Kim Josefsson, Pasi Lehtimäki, Tiina Metso, Eetu Mäkelä, Matti Nykänen, Taneli Rantala, and Jaana Tegelberg. Our work was partly funded by the National Technology Agency Tekes, Nokia, TietoEnator, the Espoo City Museum, and the Foundation of the Helsinki University Museum, and was supported by the National Board of Antiquities.

6. REFERENCES

  1. M. Agosti and A. Smeaton, editors. Information retrieval and hypertext. Kluwer, New York, 1996.
  2. D. Fensel (ed.). The semantic web and its languages. IEEE Intelligent Systems, Nov/Dec 2000.
  3. C. Fellbaum, editor. WordNet. An electronic lexical database. The MIT Press, Cambridge, Massachusetts, 2001.
  4. E. Hyvönen, A. Styrman, and S. Saarela. Ontology-based image retrieval. Number 2002-03 in HIIT Publications, pages 15-27. Helsinki Institute for Information Technology (HIIT), Helsinki, Finland, 2002. http://www.hiit.fi (31.3.2003).
  5. Eero Hyvönen, Petteri Harjula, and Kim Viljanen. Representing metadata about web resources. In E. Hyvönen, editor, Semantic Web Kick-Off in Finland, number 2002-01 in HIIT Publications. Helsinki Institute for Information Technology (HIIT), May 2002. http://www.cs.helsinki.fi/u/eahyvone/stes/semanticweb/ (31.3.2003).
  6. Steve Pepper. The TAO of Topic Maps. In Proceedings of XML Europe 2000, Paris, France, 2000. http://www.ontopia.net/topicmaps/materials/rdf.html (31.3.2003).
  7. T. Peterson. Introduction to the Art and Architecture Thesaurus. Oxford University Press, 1994.
  8. A. S. Pollitt. The key role of classification and indexing in view-based searching. Technical report, University of Huddersfield, UK, 1998. http://www.ifla.org/IV/ifla63/63polst.pdf (31.3.2003).
  9. A. T. Schreiber, B. Dubbeldam, J. Wielemaker, and B. J. Wielinga. Ontology-based photo annotation. IEEE Intelligent Systems, 16:66-74, May/June 2001.
  10. G. Schreiber, I. Blok, D. Carlier, W. van Gent, J. Hokstam, and U. Roos. A mini-experiment in semantic annotation. In I. Horrocks and J. Hendler, editors, The Semantic Web - ISWC 2002. First international semantic web conference, number LNCS 2342, pages 404-408. Springer-Verlag, Berlin, 2002.
  11. J. van den Berg. Subject retrieval in pictorial information systems. In Proceedings of the 18th international congress of historical sciences, Montreal, Canada, pages 21-29, 1995. http://www.iconclass.nl/texts/history05.htm (31.3.2003).
  12. http://vesa.lib.helsinki.fi (31.3.2003).
  13. http://www.iconclass.nl (31.3.2003).