Monday, 21 December 2015

AdRotator XML file in ASP.NET

The XML below is an advertisement file for the ASP.NET AdRotator control (pointed to by the control's AdvertisementFile property). Each <Ad> entry gives the image, the target URL, the alternate text, a relative display weight (Impressions) and a keyword for filtering.
<?xml version="1.0" encoding="utf-8" ?>
<Advertisements>
    <Ad>
        <ImageUrl>Desert.jpg</ImageUrl>
        <NavigateUrl>http://www.facebook.com</NavigateUrl>
        <AlternateText> Facebook logo</AlternateText>
        <Impressions>20</Impressions>
        <Keyword>social networking sites</Keyword>
    </Ad>
    <Ad>
        <ImageUrl>Tulips.jpg</ImageUrl>
        <NavigateUrl>http://enthusiaststudent.blogspot.in</NavigateUrl>
        <AlternateText>Technology</AlternateText>
        <Impressions>25</Impressions>
        <Keyword>Image processing</Keyword>
    </Ad>
</Advertisements>

Sunday, 13 December 2015

Image Search Using GUIDE in MATLAB

To run the code below, place images of actors in an 'actors' folder, images of actresses in an 'actress' folder and images of dogs in a 'dogs' folder, then place all three folders on your computer's desktop.

Type guide at the MATLAB command prompt, create a GUI as shown below, and save it on the desktop.



Run it.
GUIDE generates the code shown below in green automatically.
Copy and paste the code shown in black into the automatically generated code, then run it in MATLAB.
function varargout = untitled(varargin)
% UNTITLED MATLAB code for untitled.fig
%      UNTITLED, by itself, creates a new UNTITLED or raises the existing
%      singleton*.
%
%      H = UNTITLED returns the handle to a new UNTITLED or the handle to
%      the existing singleton*.
%
%      UNTITLED('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in UNTITLED.M with the given input arguments.
%
%      UNTITLED('Property','Value',...) creates a new UNTITLED or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before untitled_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to untitled_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help untitled

% Last Modified by GUIDE v2.5 13-Dec-2015 00:03:51

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @untitled_OpeningFcn, ...
                   'gui_OutputFcn',  @untitled_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before untitled is made visible.
function untitled_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to untitled (see VARARGIN)

% Choose default command line output for untitled
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes untitled wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = untitled_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;



function edit1_Callback(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text
%        str2double(get(hObject,'String')) returns contents of edit1 as a double


% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

first=get(handles.edit1,'string');
foldername=strcat('C:\Users\s\Desktop\',first,'\'); % type the path where the folders are located
imname=strcat(foldername,'*.png'); % images are in .png format; if your images are in another format, change the extension
imgs=dir(imname);
n = length(imgs); % number of images found in the folder
filename=strcat(foldername,imgs(1).name);
Cval = imread(filename);
handles.C = Cval;
axes(handles.axes1);
imshow(Cval);
handles.output = hObject;
guidata(hObject, handles);

filename2=strcat(foldername,imgs(2).name);
Cval = imread(filename2);
handles.C = Cval;
axes(handles.axes2);
imshow(Cval);
handles.output = hObject;
guidata(hObject, handles);

=====end of code======================


Now type actress in the edit box and click the Search Image button.
You will get output as shown below.


Saturday, 28 November 2015

Drowsiness Detection Using LBP, OpenCV and Python

Prerequisites for learning and running the code below

I started by learning Python and OpenCV.

These are the URLs I followed to learn:
http://docs.opencv.org/trunk/doc/py_tutorials/py_tutorials.html
https://www.youtube.com/playlist?list=PLA175E8A1816CD64B

I experimented and learned how to install packages using pip and easy_install. Later I saw the URLs below:

http://scikit-learn.org/stable/modules/svm.html
http://scikit-image.org/docs/dev/auto_examples/plot_hog.html

and wrote the code below by adapting the code from the above URLs.

import cv2
from skimage.feature import local_binary_pattern
import numpy as np

# Haar cascades for the face, the right eye and the nose
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('rightEye.xml')
nose_cascade = cv2.CascadeClassifier('haarcascade_mcs_nose.xml')

METHOD = 'uniform'

radius = 2
n_points = 8 * radius

def hist(ax, lbp):
    # Plot helper from the scikit-image LBP example: draws a normalized
    # histogram of the LBP codes on a matplotlib axes. Unused below.
    n_bins = int(lbp.max() + 1)
    return ax.hist(lbp.ravel(), normed=True, bins=n_bins, range=(0, n_bins),
                   facecolor='0.5')
def kullback_leibler_divergence(p, q):
    p = np.asarray(p)
    q = np.asarray(q)
    filt = np.logical_and(p != 0, q != 0)
    return np.sum(p[filt] * np.log2(p[filt] / q[filt]))


def match(refs, img):
    # Return the name of the reference whose LBP histogram is closest
    # to the image's, measured by Kullback-Leibler divergence.
    best_score = 10
    best_name = None
    lbp = local_binary_pattern(img, n_points, radius, METHOD)
    n_bins = int(lbp.max() + 1)  # histogram bin counts must be integers
    hist, _ = np.histogram(lbp, normed=True, bins=n_bins, range=(0, n_bins))
    for name, ref in refs.items():
        ref_hist, _ = np.histogram(ref, normed=True, bins=n_bins,
                                   range=(0, n_bins))
        score = kullback_leibler_divergence(hist, ref_hist)
        if score < best_score:
            best_score = score
            best_name = name
    return best_name


# Reference images, keeping the names from the scikit-image texture example:
# 'brick' is a closed eye, 'grass' and 'wall' are open eyes.
brick = cv2.imread('eclosed.jpg')
brick = cv2.cvtColor(brick, cv2.COLOR_BGR2GRAY)
brick = cv2.resize(brick, (21,21), interpolation=cv2.INTER_AREA)

grass = cv2.imread('eopen1.jpg')
grass = cv2.cvtColor(grass, cv2.COLOR_BGR2GRAY)
grass = cv2.resize(grass, (21,21), interpolation=cv2.INTER_AREA)

wall = cv2.imread('eye2.jpg')
wall = cv2.cvtColor(wall, cv2.COLOR_BGR2GRAY)
wall = cv2.resize(wall, (21,21), interpolation=cv2.INTER_AREA)

refs = {
    'brick': local_binary_pattern(brick, n_points, radius, METHOD),
    'grass': local_binary_pattern(grass, n_points, radius, METHOD),
    'wall': local_binary_pattern(wall, n_points, radius, METHOD)
}
cap = cv2.VideoCapture('sample1.avi')


if __name__ == "__main__":
    while cap.isOpened():
        ret, img = cap.read()
        if not ret:  # stop at the end of the video
            break
        #img = cv2.imread('closed.png')
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x,y,w,h) in faces:
            img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = img[y:y+h, x:x+w]
            eyes = eye_cascade.detectMultiScale(roi_gray)
            for (ex,ey,ew,eh) in eyes:
                cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
            if len(eyes) < 2:  # the code below needs the second detection
                continue
            #le = roi_color[eyes[0,1]:eyes[0,1]+eyes[0,3], eyes[0,0]:eyes[0,0]+eyes[0,2]]
            re = roi_color[eyes[1,1]:eyes[1,1]+eyes[1,3], eyes[1,0]:eyes[1,0]+eyes[1,2]]
            #le = cv2.cvtColor(le, cv2.COLOR_BGR2GRAY)
            re = cv2.cvtColor(re, cv2.COLOR_BGR2GRAY)
            re = cv2.resize(re, (21,21), interpolation=cv2.INTER_AREA)
            #lstate = match(refs, le)
            rstate = match(refs, re)
            if rstate == 'brick':  # closest to the closed-eye reference
                cv2.putText(img,'closed',(10,90),cv2.FONT_HERSHEY_SIMPLEX,1,(255,0,255),2,cv2.LINE_AA)
##            elif lstate == 'brick' or rstate == 'brick':
##                cv2.putText(img,'closed/open',(900,900),cv2.FONT_HERSHEY_SIMPLEX,1,(255,0,255),2,cv2.LINE_AA)
            else:
                cv2.putText(img,'open',(10,90),cv2.FONT_HERSHEY_SIMPLEX,1,(255,0,255),2,cv2.LINE_AA)
            cv2.imshow('frame',img)
            k = cv2.waitKey(1)
            if k == 32:  # exit when the spacebar is pressed
                cap.release()
                cv2.destroyAllWindows()
                break

Output:



Algorithm used:

Extracting Eye Module
Input: video / camera
Method:
For each frame do
      1. Detect the faces using the Viola-Jones face detection algorithm
      2. Crop the face
      3. Detect the eyes in the cropped face using the Viola-Jones eye detection algorithm
            for each eye_bounding_box returned
                  find the pair of eye_bounding_boxes that are approximately equal in position in the y-direction
      4. Crop the right eye
Output: eye_image

Eye State Detection Module
Input:
      open and closed eye training images and a test eye image
Method:
      1. Compute a collection of LBPs for each eye image in the training and test data
      2. Using the histogram (equal-width bins) of the LBP collection of each eye image, calculate the value of the normalized probability density function at each bin
      3. Calculate the distance between each training image's probability distribution and the test image's using Kullback-Leibler divergence
      4. Output the training label with the least distance
Output:
      State of the eye, whether 'closed' or 'open'
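
As a standalone illustration of the eye-state module, here is a minimal sketch that classifies one test eye image by comparing normalized LBP histograms with Kullback-Leibler divergence. The file names open.png, closed.png and test.png are hypothetical placeholders, and a fixed bin count is used so the histograms stay comparable:

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

RADIUS = 2
N_POINTS = 8 * RADIUS

def lbp_hist(path):
    # Grayscale image -> LBP codes -> normalized histogram.
    # 'uniform' LBP with P points yields codes in 0..P+1, so P+2 bins
    # cover every possible code for every image.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    lbp = local_binary_pattern(img, N_POINTS, RADIUS, 'uniform')
    n_bins = N_POINTS + 2
    h, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return h

def kl_divergence(p, q):
    # Kullback-Leibler divergence over bins where both densities are nonzero.
    mask = np.logical_and(p != 0, q != 0)
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

refs = {'open': lbp_hist('open.png'), 'closed': lbp_hist('closed.png')}
test = lbp_hist('test.png')
# The reference whose histogram is nearest to the test histogram wins.
print(min(refs, key=lambda name: kl_divergence(test, refs[name])))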

Face and Facial Feature Detection Using OpenCV and Python

The program below detects eyes even when they are closed. I used the right-eye Haar classifier from http://alereimondo.no-ip.org/OpenCV/34

In every frame of the video it finds the nose, mouth and eyes; no tracking algorithm is used.
Requirements to run the code below:
OpenCV 3
Python 2.7
NumPy 1.9.1
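
To check that your installation matches, you can print the versions (a quick sanity check; both modules expose __version__):

import cv2
import numpy
print(cv2.__version__)    # expect 3.x
print(numpy.__version__)  # expect 1.9.x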

import cv2

cap = cv2.VideoCapture('seq_bruges04_300frames.avi')
ret, img = cap.read()  # read one frame up front to confirm the capture works
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
#righteye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')
#lefteye_cascade = cv2.CascadeClassifier('haarcascade_lefteye_2splits.xml')
eye_cascade = cv2.CascadeClassifier('rightEye.xml')
mouth_cascade = cv2.CascadeClassifier('haarcascade_mcs_mouth.xml')
nose_cascade = cv2.CascadeClassifier('haarcascade_mcs_nose.xml')

while cap.isOpened():
    ret, img = cap.read()
    if not ret:  # stop at the end of the video
        break

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        i = 0
        while i < len(eyes)-1:
            ex1,ey1,ew1,eh1 = eyes[i]
            ex2,ey2,ew2,eh2 = eyes[i+1]
            if abs(ex1-ex2) > 20 and abs(ey1-ey2)<10:
                cv2.rectangle(roi_color,(ex1,ey1),(ex1+ew1,ey1+eh1),(0,255,0),2)
                cv2.rectangle(roi_color,(ex2,ey2),(ex2+ew2,ey2+eh2),(0,255,0),2)
            #else:
                #cv2.rectangle(roi_color,(ex1,ey1),(ex1+ew1,ey1+eh1),(0,255,0),2)
            i = i+1
        nose = nose_cascade.detectMultiScale(roi_gray)
        for (nx,ny,nw,nh) in nose:
            cv2.rectangle(roi_color,(nx,ny),(nx+nw,ny+nh),(0,255,0),2)
        if len(nose):  # only search for the mouth when a nose was found
            mouth = mouth_cascade.detectMultiScale(roi_gray)
            for (mx,my,mw,mh) in mouth:
                if my > ny + nh/2:  # keep only detections below the nose
                    cv2.rectangle(roi_color,(mx,my),(mx+mw,my+mh),(0,255,0),2)
    cv2.imshow('img',img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Output:


Sunday, 25 January 2015

Background Subtraction Tracking

The code below is an implementation, using OpenCV 3, of the code at
https://bitbucket.org/ElissonMichael/tcc_implementacao/raw/595d58fe68e4b8ee49b211c904e28be1533c8efc/Background/NoBackgroundCam.py

The corresponding video is at
https://www.youtube.com/watch?v=KRKKektCcok

import numpy as np
import cv2
maiorArea = 0  # largest contour area seen so far (Portuguese identifiers kept from the original)
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    cap.open(0)  # open() needs a device index; the default camera is 0

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("Webcam", frame)
    bkg = frame.copy()
    fundo = cv2.GaussianBlur(bkg,(3,3),0)  # 'fundo' is the background frame
    print("OK")
    if cv2.waitKey(1) == 32:  # press the spacebar to freeze the background
        cv2.destroyWindow("Webcam")
        break
       
retangulo_de_interesse = None  # bounding box of the largest contour so far
while True:
    ret, imagem = cap.read()
    mascara = imagem.copy()  # 'mascara' = mask, 'cinza' = gray
    cinza = imagem.copy()
    #cv2.imshow("Webcam", imagem)
    imagem = cv2.GaussianBlur(imagem,(3,3),0)
    cv2.absdiff(imagem,fundo,mascara)
    gray = cv2.cvtColor(mascara, cv2.COLOR_BGR2GRAY)
    ret,thresh1 = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
    kernel = np.ones((3,3),np.uint8)
    dilated = cv2.dilate(thresh1,kernel,iterations = 18)
    cinza = cv2.erode(dilated,kernel,iterations = 10)
    _,contorno,heir=cv2.findContours(cinza,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    for cnt in contorno:
        vertices_do_retangulo = cv2.boundingRect(cnt)
        if cv2.contourArea(cnt) > maiorArea:
            maiorArea = cv2.contourArea(cnt)
            retangulo_de_interesse = vertices_do_retangulo
        if retangulo_de_interesse is None:  # no contour recorded yet
            continue
        ponto1 = (retangulo_de_interesse[0], retangulo_de_interesse[1])
        ponto2 = (retangulo_de_interesse[0] + retangulo_de_interesse[2], retangulo_de_interesse[1] + retangulo_de_interesse[3])
        cv2.rectangle(imagem, ponto1, ponto2, (0,0,0), 2)
        cv2.rectangle(cinza, ponto1, ponto2, (255,255,255), 1)
        largura = ponto2[0] - ponto1[0]
        altura = ponto2[1] - ponto1[1]
        # integer division keeps the midpoints valid pixel coordinates in Python 3
        cv2.line(cinza,(ponto1[0]+largura//2,ponto1[1]),(ponto1[0]+largura//2,ponto2[1]),(255,255,255), 1)
        cv2.line(cinza,(ponto1[0],ponto1[1]+altura//2),(ponto2[0],ponto1[1]+altura//2),(255,255,255), 1)

    cv2.imshow("Mascara", mascara)
    cv2.imshow("Cinza", cinza)
   
    cv2.imshow("Webcam", imagem)
    cv2.imshow("Dilated", thresh1)
    #cv2.imshow("Fundo", dilated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
            break
   

# Release everything if job is finished
cap.release()
cv2.destroyAllWindows()



Alternatively, the bounding-box loop can be replaced with a rotated rectangle computed by cv2.minAreaRect:

    for cnt in contorno:
        vertices_do_retangulo = cv2.minAreaRect(cnt)
        if cv2.contourArea(cnt) > maiorArea:
            maiorArea = cv2.contourArea(cnt)
            retangulo_de_interesse = cv2.boxPoints(vertices_do_retangulo)
            retangulo_de_interesse = np.int0(retangulo_de_interesse)
            cv2.drawContours(imagem,[retangulo_de_interesse],-1,(0,0,0), 2)
            cv2.drawContours(cinza,[retangulo_de_interesse],-1,(255,255,255), 1)

Or, a variant that uses a box blur and morphological closing instead of dilation and erosion:

import numpy as np
import cv2
maiorArea = 0
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    cap.open(0)  # open() needs a device index; the default camera is 0

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("Webcam", frame)
    bkg = frame.copy()
    fundo = cv2.blur(bkg,(5,5))
    print("OK")
    if cv2.waitKey(1) == 32:  # press the spacebar to freeze the background
        break
       
retangulo_de_interesse = None  # bounding box of the largest contour so far
while True:
    ret, imagem = cap.read()
    mascara = imagem.copy()
    cinza = imagem.copy()
    #cv2.imshow("Webcam", imagem)
    imagem = cv2.blur(imagem,(5,5))
    cv2.absdiff(imagem,fundo,mascara)
    gray = cv2.cvtColor(mascara, cv2.COLOR_BGR2GRAY)
    ret,thresh1 = cv2.threshold(gray,100,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
    cinza = cv2.morphologyEx(thresh1, cv2.MORPH_CLOSE, kernel)
    cinza = cv2.blur(cinza, (9,9))
    _,contorno,heir=cv2.findContours(cinza,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    for cnt in contorno:
        vertices_do_retangulo = cv2.boundingRect(cnt)
        if cv2.contourArea(cnt) > maiorArea:
            maiorArea = cv2.contourArea(cnt)
            retangulo_de_interesse = vertices_do_retangulo
        if retangulo_de_interesse is None:  # no contour recorded yet
            continue
        ponto1 = (retangulo_de_interesse[0], retangulo_de_interesse[1])
        ponto2 = (retangulo_de_interesse[0] + retangulo_de_interesse[2], retangulo_de_interesse[1] + retangulo_de_interesse[3])
        cv2.rectangle(imagem, ponto1, ponto2, (0,0,0), 2)
        cv2.rectangle(cinza, ponto1, ponto2, (255,255,255), 1)
        largura = ponto2[0] - ponto1[0]
        altura = ponto2[1] - ponto1[1]
        # integer division keeps the midpoints valid pixel coordinates in Python 3
        cv2.line(cinza,(ponto1[0]+largura//2,ponto1[1]),(ponto1[0]+largura//2,ponto2[1]),(255,255,255), 1)
        cv2.line(cinza,(ponto1[0],ponto1[1]+altura//2),(ponto2[0],ponto1[1]+altura//2),(255,255,255), 1)

       
 
    cv2.imshow("Mascara", mascara)
    cv2.imshow("Cinza", cinza)
   
    cv2.imshow("Webcam", imagem)
    #cv2.imshow("Thresholded", thresh1)
    #cv2.imshow("Fundo", fundo)
    if cv2.waitKey(1) & 0xFF == ord('q'):
            break
   

# Release everything if job is finished
cap.release()
cv2.destroyAllWindows()




Tuesday, 20 January 2015

Neural Network in MATLAB 2014

Each row in Training is one training example.
Each row in GroupTrain holds the class of the corresponding training example.

The model is trained on the training data. (TreeBagger, used below, is an ensemble of bagged decision trees rather than a neural network, but the train/predict workflow is the same.)

%classification using TreeBagger
model1 = TreeBagger(100,Training,GroupTrain,'nprint',10);
disp(model1);

%How well the model classifies the training data can be tested with the following command

[tumortypeTrainPred, tumortypeTrainPredScores] = predict(model1,Training);

%To check the model's prediction on a single instance, e.g. the 25th training example:
[tumortypeTrainPred, tumortypeTrainPredScores] = predict(model1,Training(25,:));
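
For readers without MATLAB, a rough scikit-learn analogue of this workflow might look like the sketch below; the toy Training/GroupTrain arrays are hypothetical stand-ins, and RandomForestClassifier plays the role of TreeBagger's bagged trees:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

Training = np.random.rand(30, 5)               # hypothetical toy features
GroupTrain = np.random.randint(0, 2, size=30)  # hypothetical labels

# 100 bagged decision trees, as TreeBagger(100,...) above
model1 = RandomForestClassifier(n_estimators=100).fit(Training, GroupTrain)

pred = model1.predict(Training)             # predictions on the training data
scores = model1.predict_proba(Training)     # per-class scores
print(model1.predict(Training[24:25, :]))   # prediction for the 25th example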
========================================================
%some demos in matlab 2014
edit classify_wine_demo
edit cancerdetectdemonnet
edit appcr1


If you are using MATLAB 2010b with nntool, you can watch the video below:
https://www.youtube.com/watch?v=2Z4959acjKs

and
http://in.mathworks.com/products/neural-network/features.html#data-fitting%2C-clustering%2C-and-pattern-recognition

In MATLAB 2014, one can use the following apps, found under the APPS tab:
Neural Net Clustering
Neural Net Fitting
Neural Net Pattern Recognition
========================================================
Some more examples
========================================================
% Linear discriminant analysis on the first two features; classifying the
% training data with itself gives the resubstitution error below.
ldaClass = classify(Training(:,1:2),Training(:,1:2),GroupTrain);
N=size(GroupTrain,1);
bad = ~strcmp(ldaClass,GroupTrain);
ldaResubErr = sum(bad) / N
[ldaResubCM,grpOrder] = confusionmat(GroupTrain,ldaClass)
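
The same resubstitution check can be sketched with scikit-learn's linear discriminant analysis (assuming a recent scikit-learn; the toy X/y data are hypothetical stand-ins):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

X = np.random.rand(40, 2)              # hypothetical toy features
y = np.random.randint(0, 2, size=40)   # hypothetical labels

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
resub_err = np.mean(pred != y)         # fraction misclassified, as ldaResubErr
cm = confusion_matrix(y, pred)         # as confusionmat above
print(resub_err)
print(cm)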

========================================================
clear all;
close all;
load('net.mat'); % assumed to contain the inputs and targets matrices
% Create a Pattern Recognition Network
hiddenLayerSize = 200;
net = patternnet(hiddenLayerSize);


% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;


% Train the Network
[net,tr] = train(net,inputs,targets);

% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)


% View the Network
view(net);
%save the network to a file (save takes a file name, not the variable)
save('net.mat','net');
%To do prediction, you can use sim(net, inputs)

%To Test the network
testX = inputs(:,tr.testInd);
testT = targets(:,tr.testInd);

testY = net(testX);
testIndices = vec2ind(testY);
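
For comparison, a loose scikit-learn counterpart of the pattern-recognition network is sketched below. MLPClassifier needs scikit-learn 0.18 or later, the toy data are hypothetical, and samples are rows here rather than columns as in MATLAB:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(200, 10)             # hypothetical toy inputs
y = np.random.randint(0, 3, size=200)   # hypothetical class targets

# hold out 15% for testing, roughly like net.divideParam.testRatio above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)
net = MLPClassifier(hidden_layer_sizes=(200,), max_iter=500).fit(X_train, y_train)
print(net.score(X_test, y_test))        # accuracy on the held-out test set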
=======================================

Friday, 16 January 2015

Principal Component Analysis Well Explained With an Example in MATLAB


Consider a 2x2 image with pixel values

1   3
5   4

Reading the image column by column turns it into a single column vector, [1; 5; 3; 4]. Doing this for 8 such images gives the matrix X below, one image per column:

X = [1     2     4     3     5     9     4     2;
     5     4     7     4     3     2     1     3;
     3     2     4     5     6     2     1     2;
     4     1     3     2     2     1     3     4]

The first column is one feature vector with four dimensions, and there are 8 feature vectors.
Here, the dimension (4) is smaller than the number of feature vectors (8).
[vecs,val]=eigs(X*X',2); % X*X' is 4x4 (4x8 times 8x4); eigs(X*X',2) keeps only the two largest eigenvectors
% So, instead of storing 4 basis vectors, you store 2
wt1=vecs'*X(:,1); % two weights represent the first feature vector
reconstructed1=vecs*wt1; % approximation of the original first column
wt2=vecs'*X(:,2);
reconstructed2=vecs*wt2;

For example, if you instead have 4 feature vectors and each one has 8 dimensions, as shown below:

X = [1     5     3     4;
     2     4     2     1;
     4     7     4     3;
     3     4     5     2;
     5     3     6     2;
     9     2     2     1;
     4     1     1     3;
     2     3     2     4];
[vecs,val]=eigs(X'*X,2); % X'*X is only 4x4, whereas X*X' would be 8x8; this is the trick used by Turk and Pentland
ef=X*vecs; % to get eigenvectors of X*X', multiply X by the eigenvectors of X'*X
for i=1:size(ef,2)
    ef(:,i)=ef(:,i)./norm(ef(:,i)); % normalize each eigenvector
end
wt1=ef'*X(:,1); % each 8-dimensional shape is now represented by two weights
reconstructed1=ef*wt1; % you get the first shape back, approximately
wt2=ef'*X(:,2);
reconstructed2=ef*wt2; % note: the basis is ef here, not vecs


You can then get an approximation of the original image back from the stored weights.
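
The same small-covariance trick in NumPy, for readers without MATLAB (a sketch using the 8x4 X from the second example):

import numpy as np

X = np.array([[1, 5, 3, 4],
              [2, 4, 2, 1],
              [4, 7, 4, 3],
              [3, 4, 5, 2],
              [5, 3, 6, 2],
              [9, 2, 2, 1],
              [4, 1, 1, 3],
              [2, 3, 2, 4]], dtype=float)

# Eigenvectors of the small 4x4 X'X are lifted to eigenvectors of the 8x8 XX'.
vals, vecs = np.linalg.eigh(X.T.dot(X))   # eigenvalues come back in ascending order
vecs = vecs[:, ::-1][:, :2]               # keep the two largest eigenvectors
ef = X.dot(vecs)                          # lift to eigenvectors of X*X'
ef /= np.linalg.norm(ef, axis=0)          # normalize each column

wt1 = ef.T.dot(X[:, 0])                   # two weights represent the first shape
reconstructed = ef.dot(wt1)               # approximate reconstruction
print(reconstructed)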

Friday, 9 January 2015

Horizontal and Vertical Flip Using OpenCV and Python

import numpy as np
import cv2

img = cv2.imread('1.png')
rimg = cv2.flip(img,1)  # flipCode=1 flips around the vertical axis: a horizontal flip
fimg = cv2.flip(img,0)  # flipCode=0 flips around the horizontal axis: a vertical flip
cv2.imshow("Original", img)
cv2.imshow("horizontal flip", rimg)
cv2.imshow("vertical flip", fimg)
cv2.waitKey(0)
cv2.destroyAllWindows()