How Visual-Based Search Can Be Obtained Using Matlab Programme - Assignment Example


Image Based Search Engine Using Matlab

Name: Course: Lecturer: Date:

Abstract

This lab experiment illustrates how visual-based search can be achieved using a Matlab program. It outlines the stepwise development of an improved Matlab code that searches through a database of images and returns matches ranked by similarity, from the most similar to the least similar. The report concludes on the reliability of Matlab in this application and shows that sites and situations requiring this kind of search can rely on Matlab for the development of such a search engine.

Introduction

Digital image collections are usually searched using text-based querying, such as that used in Google. Text is good at describing the objects present in an image, but it describes the visual appearance of the image more poorly than an image itself would. This report presents the design of an image-based search engine and display. The search engine can be incorporated in any system that requires matching of images, for example image matching in forensic systems (Fourandsix.com, 2016) or browsing in an online shopping system (Bustos, 2016). The experiment uses Matlab for the coding, because Matlab is an efficient system for pattern and colour recognition, which form the basis of image searching (Theodoridis and Koutroumbas, 2006).

The main objective of this experiment is to write a program that visually searches an image collection. The system is designed to accept an image as the search query and return a list of images from the collection ranked according to their similarity to the query image. The system displays only the top 20 images. Figure 1 illustrates the system design.

Figure 1: An illustration of the image-based search engine, the expected output of this experimental design.

Procedure

The base (skeleton) code was downloaded from ULearn and a dataset of images from SurreyLearn. Both zipped files were unzipped into separate folders within a folder called visiondemo, and the command prompt was used to check that there was enough space in the directory. The dataset, MSRC_ObjCategImageDatabase_v2, contains 591 images.

The image descriptors were then extracted from the dataset. The skeleton code includes a program, cvpr_computedescriptors.m, which iterates through all 591 images in the dataset and extracts an image descriptor for each one. Each descriptor is stored in a separate file, within a folder specified in the code as shown below:

DATASET_FOLDER = 'c:/visiondemo/mydemo/MSRC_ObjCategImageDatabase_v2';

and

OUT_FOLDER = 'c:/visiondemo/mydemo/descriptors';

These two paths specify the input and output folders respectively. A folder was created to store the descriptors extracted by the program, and a subfolder, globalRGBhisto, was created within the descriptors folder. The program was then run to check that output was generated.

cvpr_computedescriptors calls the function extractRandom.m on each image in turn to compute its descriptor, which takes a few minutes. As supplied, the function returns random numbers instead of a real image descriptor. It is this function that is changed in the next steps to return something more useful, namely a proper descriptor.
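For reference, the supplied extractRandom.m is only a stub. A minimal sketch of what it looks like before any modification is given below; the length of the random vector returned by the skeleton is not stated in this report, so the value of 30 used here is an assumption for illustration only.

function F=extractRandom(img)
% Placeholder descriptor from the skeleton code: it ignores the image
% content and returns a row vector of random values.
% The length of 30 is assumed here purely for illustration.
F=rand(1,30);
return;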
Running a visual search

Following the procedure of Smith (2006), the program cvpr_visualsearch uses the extracted image descriptors to run the visual search. As with the descriptor extraction program, it was configured before use. The line

DATASET_FOLDER = 'c:/visiondemo/mydemo/MSRC_ObjCategImageDatabase_v2';

was replaced with the same DATASET_FOLDER line used in cvpr_computedescriptors, and the line

DESCRIPTOR_FOLDER = 'c:/visiondemo/mydemo/descriptors';

was replaced with the OUT_FOLDER path used in cvpr_computedescriptors. The program was then run; it picked an image at random and loaded its descriptor as the query. In terms of code, the program compares the query descriptor to each of the 591 image descriptors previously extracted. The comparison is performed by calling cvpr_compare.

cvpr_compare.m was then edited. It takes two descriptors as input and computes a distance between them. As supplied, the code returns a random number as the distance. It is improved in the next step using the Euclidean distance, as described by Marques (2011).

Modifying the distance measure

cvpr_compare.m was modified to compute the Euclidean distance between the two feature descriptors F1 and F2. Each element of F2 was subtracted from the corresponding element of F1:

x = F1 - F2;

The differences were then squared:

x = x.^2;

the squared differences summed:

x = sum(x);

and the square root taken:

dst = sqrt(x);

The visual search program was then re-run. The left-hand image in the output corresponds to the most relevant image; this is the query image itself, whose descriptor matches the query perfectly, i.e. with distance zero. However, the results were still random because the image descriptors consist of random numbers, so the process was run several times for reliable conclusions. The descriptor computation was therefore modified, as described below, to give better results and improve the relationship between the searched images (Academia.edu, 2016).

Modification of the descriptor

The file edited was extractRandom.m, the function responsible for computing an image descriptor from an image. It was modified to compute a 3-dimensional image descriptor comprising the average red, green and blue values of the image, since this gives a better colour representation (Eghbalnia, 2000). The average red value was computed using:

red = img(:,:,1);
red = reshape(red, 1, []);
average_red = mean(red);

the average green value using:

green = img(:,:,2);
green = reshape(green, 1, []);
average_green = mean(green);

and the average blue value using:

blue = img(:,:,3);
blue = reshape(blue, 1, []);
average_blue = mean(blue);

The three values were then concatenated to form a feature vector F, as shown below:

F = [average_red average_green average_blue];

The descriptor extraction process and the visual search were re-run and the results noted. The search was re-run several more times to see the behaviour of the random query selection. Improved as they were, the results still varied from run to run because the query image is selected at random. They can be improved further by modifying the descriptor to a global colour histogram (Kim and Chung, 2003).
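Before moving on to the histogram, it helps to collect the fragments above in one place. A sketch of the mean-RGB version of extractRandom.m, assembled from the snippets in this section (the function signature follows the skeleton code), is:

function F=extractRandom(img)
% Mean-RGB descriptor: a 3-element feature vector holding the average
% red, green and blue values of the image supplied by
% cvpr_computedescriptors.
red   = reshape(img(:,:,1), 1, []);   % flatten the red channel
green = reshape(img(:,:,2), 1, []);   % flatten the green channel
blue  = reshape(img(:,:,3), 1, []);   % flatten the blue channel
F = [mean(red) mean(green) mean(blue)];
return;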
Modification of the descriptor to a global colour histogram

To make the results even better, a global colour histogram was used. The code shown below was embedded alongside extractRandom.m as a helper, and the histogram was computed in Matlab as follows.

function H=ComputeRGBHistogram(img,Q)
% INPUT: img, an RGB image where pixels have RGB values in range 0-255
% INPUT: Q, the level of quantisation of the RGB space, e.g. 4

% First, create qimg, an image where the RGB values are quantised into the
% range 0 to (Q-1). This is done by dividing each pixel value by 256 (to
% give a range of 0 to just under 1), multiplying by Q, then dropping the
% decimal part.
qimg=double(img)./256;
qimg=floor(qimg.*Q);

% Then, a single integer value summarising the RGB value of each pixel is
% created. This is used as the bin index in the histogram.
bin = qimg(:,:,1)*Q^2 + qimg(:,:,2)*Q^1 + qimg(:,:,3);

% 'bin' is a 2D image where each 'pixel' contains an integer value in the
% range 0 to Q^3-1 inclusive. Matlab's hist command is now used to build a
% frequency histogram from these values. First, the 2D matrix is reshaped
% into a long vector of values.
vals=reshape(bin,1,size(bin,1)*size(bin,2));

% Then, hist creates a histogram of Q^3 bins.
H = hist(vals,Q^3);

% It is convenient to normalise the histogram, so that the area under it
% sums to 1.
H = H ./ sum(H);
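With this helper in place, extractRandom.m can simply delegate to it. The report does not list the final version of extractRandom.m, so the sketch below is an assumption of how it might look: the quantisation level Q = 4 is taken from the example in the helper's comments, and the image is rescaled to the range 0-255 because cvpr_computedescriptors passes it in with values in the range 0-1.

function F=extractRandom(img)
% Global RGB colour histogram descriptor (sketch).
% cvpr_computedescriptors supplies img with values in the range 0-1, while
% ComputeRGBHistogram expects 0-255, so the image is rescaled first.
Q = 4;                                 % assumed quantisation level: 4^3 = 64 bins
F = ComputeRGBHistogram(img.*255, Q);
return;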
Results and discussion

When run as supplied, extractRandom.m returned random numbers, as illustrated below; modifying this file gives a real image descriptor. The computation of the descriptors took close to one and a half minutes. The output below is just a section of the process results.

Elapsed time is 0.023907 seconds.
Processing file 584/589 - 9_4_s.bmp
Elapsed time is 0.025109 seconds.
Processing file 585/589 - 9_5_s.bmp
Elapsed time is 0.025196 seconds.
Processing file 586/589 - 9_6_s.bmp
Elapsed time is 0.024337 seconds.
Processing file 587/589 - 9_7_s.bmp
Elapsed time is 0.024309 seconds.
Processing file 588/589 - 9_8_s.bmp
Elapsed time is 0.024417 seconds.
Processing file 589/589 - 9_9_s.bmp

Running a visual search

The program picked an image at random and loaded its descriptor as the query. In terms of code, the program compares the query descriptor to each of the 591 image descriptors, as illustrated below. It took some time to run through the images before displaying them.

Figure 1a: first run. Figure 1b: second run. Figure 1c: third run. Figure 1d: fourth run.

Modifying the distance measure

Having modified the distance measure to the Euclidean distance, re-running the visual search program gave a left-hand output image that perfectly matched the query, i.e. it had a distance of zero. However, the results were still random, since the descriptor consisted of random numbers. Below is an illustration of the modified distance measure before modification of the descriptor.

Figure 2a: first run. Figure 2b: second run. Figure 2c: third run. Figure 2d: fourth run.

Modification of the descriptor computation

Modifying the descriptor computation to the 3-dimensional mean-RGB descriptor returned images with roughly the same overall colour. This was particularly evident in the images with large expanses of green grass. Running the descriptor extraction shows that each image is an array of dimensions 213x320x3. Since the query image is selected at random, several re-runs of the search gave the results illustrated below.

Figure 3a: first run. Figure 3b: second run. Figure 3c: third run. Figure 3d: fourth run.

With this modification, the system returns images with roughly the same overall colour, most noticeably in images containing large expanses of green grass. The process shows improved discrimination based on global colour. However, the results are not good enough and can still be improved using the histogram.

Modification of the descriptor to a global colour histogram

The histogram descriptor resulted in improved image-based search results, as can be seen below.

Figure 4a: first run. Figure 4b: second run. Figure 4c: third run. Figure 4d: fourth run.

The results show a marked improvement in the search results produced by the RGB colour histogram.

Challenges of visual search and the semantic gap

Throughout the experimentation, it was noted that the search took a long time. This means that image-based search can be very slow compared to text-based search.

Conclusion

From this experiment, it can be concluded that Matlab can be used for effective image-based searching. It can therefore be very useful on shopping sites, where image searching gives better results than text-based search, and in image matching for forensics. The code used to arrive at this conclusion is shown in the appendix below. However, the slowness of image-based search can make the process inefficient.

References

Academia.edu. (2016). Visual Image Search: Feature Signatures or/and Global Descriptors. [online] Available at: http://www.academia.edu/2514673/Visual_Image_Search_Feature_Signatures_or_and_Global_Descriptors [Accessed 16 Apr. 2016].

Bustos, L. (2016). Ecommerce Trend: Keep Your Eyes on Visual Search. [online] Getelastic.com. Available at: http://www.getelastic.com/ecommerce-trend-keep-your-eyes-on-visual-search/ [Accessed 16 Apr. 2016].

Eghbalnia, H. (2000). A Complex-Valued Overcomplete Representation of Information for Visual Search.

Fourandsix.com. (2016). Image Authentication and Forensics | Fourandsix Technologies - Blog - Image Search for Image Forensics. [online] Available at: http://www.fourandsix.com/blog/2012/8/6/image-search-for-image-forensics.html [Accessed 16 Apr. 2016].

Kim, C. and Chung, C. (2003). A multi-step approach for partial similarity search in large image data using histogram intersection. Information and Software Technology, 45(4), pp. 203-215.

Lee, R. and Kim, H. (2008). Computer and Information Science. Berlin: Springer-Verlag.

Marques, O. (2011). Practical Image and Video Processing Using MATLAB. Hoboken, NJ: Wiley-IEEE Press.

Smith, S. (2006). MATLAB. Indianapolis, IN: Dog Ear Publishing.

Theodoridis, S. and Koutroumbas, K. (2006). Pattern Recognition. Elsevier Science.

Appendix 1: Final code
cvpr_computedescriptors.m code

%% EEE3032 - Computer Vision and Pattern Recognition (ee3.cvpr)
%%
%% cvpr_computedescriptors.m
%% Skeleton code provided as part of the coursework assessment
%% This code will iterate through every image in the MSRCv2 dataset
%% and call a function 'extractRandom' to extract a descriptor from the
%% image. Currently that function returns just a random vector so should
%% be changed as part of the coursework exercise.
%%
%% (c) John Collomosse 2010 (J.Collomosse@surrey.ac.uk)
%% Centre for Vision Speech and Signal Processing (CVSSP)
%% University of Surrey, United Kingdom

close all;
clear all;

%% Edit the following line to the folder you unzipped the MSRCv2 dataset to
DATASET_FOLDER = 'c:/visiondemo/mydemo/MSRC_ObjCategImageDatabase_v2';

%% Create a folder to hold the results...
OUT_FOLDER = 'c:/visiondemo/mydemo/descriptors';

%% and within that folder, create another folder to hold these descriptors
%% the idea is all your descriptors are in individual folders - within
%% the folder specified as 'OUT_FOLDER'.
OUT_SUBFOLDER='globalRGBhisto';

allfiles=dir(fullfile([DATASET_FOLDER,'/Images/*.bmp']));
for filenum=1:length(allfiles)
    fname=allfiles(filenum).name;
    fprintf('Processing file %d/%d - %s\n',filenum,length(allfiles),fname);
    tic;
    imgfname_full=([DATASET_FOLDER,'/Images/',fname]);
    img=double(imread(imgfname_full))./255;
    fout=[OUT_FOLDER,'/',OUT_SUBFOLDER,'/',fname(1:end-4),'.mat']; % replace .bmp with .mat
    F=extractRandom(img);
    save(fout,'F');
    toc
end

cvpr_compare.m code

function dst=cvpr_compare(F1, F2)
% This function compares F1 to F2, i.e. computes the distance between the
% two descriptors. The skeleton version simply returned a random number
% (dst=rand();); it has been modified to return the Euclidean distance.
X = F1 - F2;
X = X.^2;
X = sum(X);
dst = sqrt(X);
return;

cvpr_visualsearch.m code

%% EEE3032 - Computer Vision and Pattern Recognition (ee3.cvpr)
%%
%% cvpr_visualsearch.m
%% Skeleton code provided as part of the coursework assessment
%%
%% This code will load in all descriptors pre-computed (by the
%% function cvpr_computedescriptors) from the images in the MSRCv2 dataset.
%%
%% It will pick a descriptor at random and compare all other descriptors to
%% it - by calling cvpr_compare. In doing so it will rank the images by
%% similarity to the randomly picked descriptor. Note that initially the
%% function cvpr_compare returns a random number - you need to code it
%% so that it returns the Euclidean distance or some other distance metric
%% between the two descriptors it is passed.
%%
%% (c) John Collomosse 2010 (J.Collomosse@surrey.ac.uk)
%% Centre for Vision Speech and Signal Processing (CVSSP)
%% University of Surrey, United Kingdom

close all;
clear all;

%% Edit the following line to the folder you unzipped the MSRCv2 dataset to
DATASET_FOLDER = 'c:/visiondemo/mydemo/MSRC_ObjCategImageDatabase_v2';

%% Folder that holds the results...
DESCRIPTOR_FOLDER = 'c:/visiondemo/mydemo/descriptors';

%% and within that folder, another folder to hold the descriptors
%% we are interested in working with
DESCRIPTOR_SUBFOLDER='globalRGBhisto';

%% 1) Load all the descriptors into "ALLFEAT"
%% each row of ALLFEAT is a descriptor (is an image)
ALLFEAT=[];
ALLFILES=cell(1,0);
ctr=1;
allfiles=dir(fullfile([DATASET_FOLDER,'/Images/*.bmp']));
for filenum=1:length(allfiles)
    fname=allfiles(filenum).name;
    imgfname_full=([DATASET_FOLDER,'/Images/',fname]);
    img=double(imread(imgfname_full))./255;
    thesefeat=[];
    featfile=[DESCRIPTOR_FOLDER,'/',DESCRIPTOR_SUBFOLDER,'/',fname(1:end-4),'.mat']; % replace .bmp with .mat
    load(featfile,'F');
    ALLFILES{ctr}=imgfname_full;
    ALLFEAT=[ALLFEAT ; F];
    ctr=ctr+1;
end

%% 2) Pick an image at random to be the query
NIMG=size(ALLFEAT,1);           % number of images in collection
queryimg=floor(rand()*NIMG)+1;  % index of a random image (+1 keeps the index in the valid range 1..NIMG)

%% 3) Compute the distance of each image to the query
dst=[];
for i=1:NIMG
    candidate=ALLFEAT(i,:);
    query=ALLFEAT(queryimg,:);
    thedst=cvpr_compare(query,candidate);
    dst=[dst ; [thedst i]];
end
dst=sortrows(dst,1);  % sort the results by ascending distance

%% 4) Visualise the results
%% These may be a little hard to see using imshow
%% If you have access, try using imshow(outdisplay) or imagesc(outdisplay)
SHOW=20; % Show top 20 results
dst=dst(1:SHOW,:);
outdisplay=[];
for i=1:size(dst,1)
    img=imread(ALLFILES{dst(i,2)});
    img=img(1:2:end,1:2:end,:);  % make image a quarter size
    img=img(1:81,:,:);           % crop image to uniform height (some MSRC images are different heights)
    outdisplay=[outdisplay img];
end
imshow(outdisplay);
axis off;
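As an aside, the Euclidean distance computed step by step in cvpr_compare.m above can be written more compactly using Matlab's built-in norm function. This is an equivalent alternative, not the code used in the experiment:

function dst=cvpr_compare(F1, F2)
% Euclidean (L2) distance between two descriptors, using norm.
dst = norm(F1 - F2);
return;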
