What is OpenCV?
OpenCV [OpenCV] is an open source (see http://opensource.org) computer vision library available from http://SourceForge.net/projects/opencvlibrary. The library is written in C and C++ and runs under Linux, Windows and Mac OS X. There is active development on interfaces for Python, Ruby, Matlab, and other languages. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. OpenCV is written in optimized C and can take advantage of multicore processors. If you desire further automatic optimization on Intel architectures [Intel], you can buy Intel’s Integrated Performance Primitives (IPP) libraries [IPP], which consist of low-level optimized routines in many different algorithmic areas. OpenCV automatically uses the appropriate IPP library at runtime if that library is installed.
One of OpenCV’s goals is to provide a simple-to-use computer vision infrastructure that helps people build fairly sophisticated vision applications quickly. The OpenCV library contains over 500 functions that span many areas in vision, including factory product inspection, medical imaging, security, user interface, camera calibration, stereo vision, and robotics. Because computer vision and machine learning go hand-in-hand, OpenCV also contains a full, general-purpose Machine Learning Library (MLL). This sublibrary is focused on statistical pattern recognition and clustering. The MLL is highly useful for the vision tasks that are at the core of OpenCV’s mission, but it is general enough to be used for any machine learning problem.
That was a brief introduction to OpenCV. OpenCV has many uses in image processing and has become a standard tool for engineers, students, and researchers across the world.
Recently we have been working on an image processing project, “Motion Tracking and HCI”, and we have based it on OpenCV, which has greatly reduced its complexity. All updates on the project will be shared here: we update our source code often, occasionally merge it with other work, and keep implementing different algorithms. Hopefully this will also serve as a starting point for others working on a similar project.
The objective of our project is to map different hand gestures to mouse movements; that is the HCI part of the project, while motion tracking is used to find the motion region and follow the gestures. Though it is not fully implemented yet, we have achieved enough to demonstrate at this point, and we keep implementing and testing new ideas and methodologies as we go.
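To give a concrete picture of what “finding the motion region” means, here is a minimal sketch of plain frame differencing with cvAbsDiff and cvThreshold. This is only an illustration of the basic idea, not our project’s tracking code; the camera index 0, the threshold value 30, and the window name "motion" are arbitrary example choices.

[cpp]
#include "cv.h"
#include "highgui.h"

int main()
{
    CvCapture *capture = cvCaptureFromCAM(0); // camera at index 0 (example choice)
    if(!capture)
        return -1;

    IplImage *frame = cvQueryFrame(capture);
    if(!frame)
        return -1;

    // Grayscale buffers for the previous frame, the current frame and their difference
    IplImage *prev = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage *curr = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage *diff = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    cvCvtColor(frame, prev, CV_BGR2GRAY);

    cvNamedWindow("motion", CV_WINDOW_AUTOSIZE);
    while((frame = cvQueryFrame(capture)) != NULL)
    {
        cvCvtColor(frame, curr, CV_BGR2GRAY);

        // Pixels that changed between two consecutive frames
        cvAbsDiff(curr, prev, diff);
        // Keep only strong changes; 30 is an arbitrary example threshold
        cvThreshold(diff, diff, 30, 255, CV_THRESH_BINARY);

        cvShowImage("motion", diff);
        cvCopy(curr, prev, NULL); // the current frame becomes the previous one

        if(cvWaitKey(25) == 27)   // Escape to quit
            break;
    }

    cvReleaseImage(&prev);
    cvReleaseImage(&curr);
    cvReleaseImage(&diff);
    cvReleaseCapture(&capture);
    cvDestroyWindow("motion");
    return 0;
}
[/cpp]

The white pixels that survive the threshold mark where something moved between consecutive frames; a bounding box around those pixels is one simple way to define a motion region for further processing.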
Our knowledge of image processing currently limits which OpenCV modules and techniques we can put to use. As we learn more about the modules and different techniques, we’ll update this page. For now, here is a simple code snippet that captures images from a webcam.
[cpp]
#include "cv.h"
#include "highgui.h"

int main(int argc, char** argv)
{
    IplImage *frame;

    // Create the capture device;
    // here 0 indicates that we want to use the camera at index 0
    CvCapture *capture = cvCaptureFromCAM(0);
    if(!capture)
        return -1; // no camera found

    // highgui.h is required for the window and display functions
    cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);

    while(1)
    {
        // Query for a frame from the camera
        frame = cvQueryFrame(capture);
        if(!frame)
            break;

        // Display the captured image
        cvShowImage("capture", frame);

        char ch = cvWaitKey(25); // Wait 25 ms for the user to hit a key
        if(ch == 27)
            break;               // Has the Escape key been hit?
    }

    // Release the capture device; frames returned by cvQueryFrame are owned
    // by the capture, so they must not be released with cvReleaseImage
    cvReleaseCapture(&capture);
    cvDestroyWindow("capture");
    return 0;
}
[/cpp]
Compile the above program against OpenCV and you’ll get a nice window showing the live video stream.
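If OpenCV is installed with pkg-config support (the usual setup on Linux; package names and paths may differ on your system), the build can look something like this, assuming the file is saved as capture.cpp:

[bash]
# pkg-config fills in the OpenCV include and library flags
g++ capture.cpp -o capture `pkg-config --cflags --libs opencv`
./capture
[/bash]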
Now put cvWaitKey(0) in place of cvWaitKey(25) and see what happens: the frame is loaded only once and no more frames are captured. Guess why? That one is for you to solve.