Semantic Image Retrieval

The Objective: The proliferation of digital photos has made efficient automated image search more challenging than ever. Current image retrieval methods compare images directly by low-level perceptual characteristics such as color and texture. Humans, however, understand images by describing them with concepts, such as "building," "sky," and "people." This project employs artificial intelligence to identify semantic concepts in images using probabilistic models. No manual annotation of photos is required; images are found using content recognition.


In this research, the visual feature distribution of each concept was modeled with a Gaussian Mixture Model learned using the Expectation-Maximization algorithm.
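As a rough illustration of this step, the sketch below fits a diagonal-covariance Gaussian Mixture Model to feature vectors with a minimal Expectation-Maximization loop. The function name, the farthest-point initialization, and all constants are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def fit_gmm(X, k, n_iter=50):
    """Illustrative EM for a k-component GMM with diagonal covariances.

    X: (n, d) array of feature vectors (e.g. color/texture features).
    Returns mixture weights, component means, and per-dimension variances.
    """
    n, d = X.shape
    # Farthest-point initialization keeps the initial means well separated.
    means = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - m) ** 2).sum(axis=1) for m in means], axis=0)
        means.append(X[np.argmax(d2)])
    means = np.array(means, dtype=float)
    variances = np.ones((k, d))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point,
        # computed in log space for numerical stability.
        log_p = (-0.5 * (((X[:, None, :] - means) ** 2) / variances
                         + np.log(2 * np.pi * variances)).sum(axis=2)
                 + np.log(weights))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        variances = (resp.T @ X ** 2) / nk[:, None] - means ** 2 + 1e-6
    return weights, means, variances
```

One such model would be trained per concept on features extracted from images labeled with that concept.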

The models captured the dominant colors and textures of the concepts.

During retrieval, the models were used to determine the probability of each concept occurring in the images, producing a probability vector called a semantic multinomial (SMN). Images were retrieved by finding the most similar SMNs.
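A minimal sketch of this retrieval step, assuming each image already has per-concept log-likelihood scores from the concept models: normalize the scores into an SMN, then rank database images by Kullback-Leibler divergence between SMNs (the divergence is one plausible similarity choice; the function names are illustrative).

```python
import numpy as np

def semantic_multinomial(concept_loglikes):
    """Turn per-concept log-likelihoods for one image into a
    probability vector over concepts (the SMN)."""
    x = np.asarray(concept_loglikes, dtype=float)
    x -= x.max()               # stabilize before exponentiating
    p = np.exp(x)
    return p / p.sum()

def retrieve(query_smn, database_smns, top_k=3):
    """Rank database images by KL divergence from the query SMN
    (lower divergence = more semantically similar)."""
    q = np.asarray(query_smn)
    eps = 1e-12
    scores = [(i, float(np.sum(q * np.log((q + eps) / (np.asarray(p) + eps)))))
              for i, p in enumerate(database_smns)]
    scores.sort(key=lambda t: t[1])
    return [i for i, _ in scores[:top_k]]
```

Because SMNs are compact probability vectors, comparing them is far cheaper than comparing raw visual features.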


This research has shown that by linking images with their underlying semantic meaning, the system captured image content at a conceptual level and significantly improved retrieval accuracy.

It was also shown that a feature set consisting of YCbCr color values and Gabor texture filter responses had consistently better retrieval performance than Discrete Cosine Transform features.
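To make this feature set concrete, the sketch below computes YCbCr values from RGB (standard BT.601 coefficients) and builds the real part of a 2-D Gabor filter kernel. Kernel size and filter parameters are illustrative defaults, not the project's actual settings.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array in [0, 255] to YCbCr
    using the BT.601 full-range coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def gabor_kernel(size=15, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a sinusoid of wavelength lam,
    oriented at angle theta, modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam))
```

Convolving an image's luminance channel with a bank of such kernels at several orientations and scales yields the texture responses; combined with the YCbCr color values, these form the feature vectors the concept models are trained on.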

A novel dynamic browser was also developed for exploring image collections based on semantic similarity.

It uses an animated spring graph layout algorithm to create self-organizing clusters of semantic concepts.
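The idea behind such a layout can be sketched as a simple force simulation: connected (semantically related) concepts attract like springs while all nodes repel, so related concepts settle into clusters. All force constants and the function name below are illustrative assumptions, not the browser's actual algorithm.

```python
import numpy as np

def spring_layout(n_nodes, edges, n_iter=300, seed=0):
    """Minimal force-directed (spring) layout.

    edges: list of (i, j) pairs linking related concept nodes.
    Returns an (n_nodes, 2) array of 2-D positions.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_nodes, 2))
    for _ in range(n_iter):
        disp = np.zeros_like(pos)
        # Repulsion between every pair of nodes (inverse-square falloff).
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=2) + 1e-9
        disp += (delta / dist[..., None]
                 * (0.005 / dist ** 2)[..., None]).sum(axis=1)
        # Spring attraction along edges pulls related concepts together.
        for i, j in edges:
            d = pos[j] - pos[i]
            disp[i] += 0.1 * d
            disp[j] -= 0.1 * d
        pos += 0.1 * disp    # small step toward equilibrium
    return pos
```

Animating the intermediate positions of this loop, rather than only showing the converged layout, gives the self-organizing effect described above.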

This provides a unique way to visualize the inherent semantic relationships in the image database, as well as to search for images by concepts.


This technology has far-reaching applications beyond organizing family albums. In the medical profession, the ability to quickly correlate unknown MRI images with those of known medical disorders is now within reach. In the area of natural resource exploration for oil, gas, and coal, remotely sensed images can now be automatically related to images of known natural reserves.

The project used artificial intelligence models to associate visual features (color and texture) with images' underlying meaning, automatically annotating them with concepts like "buildings" and "water." I also created a unique dynamic image browser.

Science Fair Project done by David C. Liu







