
OpenCV vs. Apple iPhone

Posted on: 02-04-2009 | By: rhondasw | In: OpenCV


This time OpenCV was ported to the Apple iPhone platform.

First of all, we need to compile the OpenCV library itself so that it can be used on the iPhone. There are two ways to do this:

1. Use OpenCV as a private framework.
2. Compile OpenCV as a static library.

The first approach looks more convenient to use, though I was not able to make it work properly on the iPhone (it works fine in the simulator, but not on the real hardware).

But anyway, let’s see how to follow both approaches.

1. Private framework

Instructions on how to build a universal OpenCV framework for the simulator and the iPhone (supporting both i686 and ARM) can be found here.

To add this framework to your application, do the following:

1. Create a new application in Xcode.
2. Right-click the Frameworks group and select “Add -> Existing Frameworks”.
3. Select the OpenCV.framework folder you created.
4. In the Xcode menu, select “Project -> New Build Phase -> New Copy Files Build Phase”.
5. In the window that opens, select “Frameworks” as the destination and close the window.
6. Now expand the “Targets -> your_target” group and drag OpenCV.framework from the Frameworks group to the “Copy Files” group under your target.
7. Add “#import <OpenCV/OpenCV.h>” to the source files where you use OpenCV APIs.
8. You will probably have to change the type of your sources: rename the source files where you use OpenCV APIs from “.m” to “.mm”, so they are compiled as Objective-C++.

Now you should be able to use OpenCV routines in your application. But again, for me this worked perfectly in the simulator, while on the iPhone the application crashed right after start. I’ll investigate this and post an update later.
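To sanity-check the setup, here is a minimal sketch of such a .mm source (the file and function names are mine, not part of OpenCV):

// OpenCVSanityCheck.mm: compiled as Objective-C++ so the OpenCV headers parse.
#import <Foundation/Foundation.h>
#import <OpenCV/OpenCV.h>

// Allocate and free a small image; if this runs without crashing,
// the framework is linked and loadable at run time.
static BOOL OpenCVLinksOK(void)
{
	IplImage *img = cvCreateImage(cvSize(64, 64), IPL_DEPTH_8U, 3);
	if (img == NULL)
		return NO;
	cvReleaseImage(&img);
	return YES;
}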

2. Static library

This approach is less convenient to use, but it works on both the simulator and the hardware. So, let’s start.

1. To create the static library, follow these instructions.
2. Now that you have five *.a files, make your life easier by putting the libraries in a separate folder. Then walk through the OpenCV sources (the cv, cvaux, cxcore etc. folders) and copy all the header files to a separate location. You will end up with a folder (let’s call it “OpenCV.lib”) containing all the *.a files and a subfolder (say, “hdrs”) containing all the header files.
3. Go ahead and create a new application in Xcode.
4. Add all the OpenCV header files to your project: right-click the “Classes” group, select “Add -> Existing Files” and double-click the “/…/OpenCV.lib/hdrs” folder you created in step 2.
5. Somewhere in your code, include the files cv.h, ml.h and highgui.h (see the sketch after this list).
6. Now double-click your target (under the “Targets” group) and go to the “Build” tab.
7. In the “Linking” section, find the “Other Linker Flags” option and add the paths to your OpenCV libraries. The field should look like this: “/…/OpenCV.lib/libcv.a /…/OpenCV.lib/libcvaux.a” and so on.
8. Ok, you are now ready to go!
9. No, stop. Don’t forget to add the libstdc++ library to your project (for example, by appending “-lstdc++” to “Other Linker Flags”); otherwise you’ll face linker errors.
10. Well, now you are ready.
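As promised in step 5, a minimal Objective-C++ (.mm) source to smoke-test the static setup might look like this; the file and function names are arbitrary, and CV_VERSION is a version-string macro defined by the OpenCV headers:

// OpenCVStaticTest.mm: use the .mm extension so the C++ parts of the
// OpenCV headers compile.
#include "cv.h"
#include "ml.h"
#include "highgui.h"

#import <Foundation/Foundation.h>

static void LogOpenCVVersion(void)
{
	// CV_VERSION is defined by the OpenCV headers.
	NSLog(@"Using OpenCV %s", CV_VERSION);

	// One real call into libcxcore.a to prove the static libraries link.
	CvMemStorage *storage = cvCreateMemStorage(0);
	cvReleaseMemStorage(&storage);
}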

A few useful notes.

1. OpenCV works with IplImage, while your application will need a UIImage for display. To convert from IplImage to UIImage, you can use the following function (thanks to this guy for the function):

-(CGImageRef)getCGImageFromCVImage:(IplImage*)cvImage
{
	// get the width and height of the source cvImage
	int height = cvImage->height;
	int width = cvImage->width;

	// create a new image with the color channels flipped (BGR to RGB),
	// since Quartz expects RGB while OpenCV stores BGR
	IplImage *imgForUI = cvCreateImage(cvSize(width, height), 8, 3);
	cvConvertImage(cvImage, imgForUI, CV_CVTIMG_SWAP_RB);

	// take the stride and channel count from the converted copy,
	// not from the (possibly differently laid out) source image
	int step = imgForUI->widthStep;
	int channels = imgForUI->nChannels;

	// copy the flipped pixel data into a CFDataRef; CFDataCreate copies
	// the bytes, so the temporary IplImage can be released right away
	CFDataRef imgData = CFDataCreate(NULL, (const UInt8 *)imgForUI->imageData, imgForUI->imageSize);
	cvReleaseImage(&imgForUI);

	// create a CGDataProvider with the CFDataRef; the provider retains
	// the data, so our own reference can be released now
	CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
	CFRelease(imgData);

	// create a CGImageRef with the CGDataProvider
	CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
	CGImageRef cgImage = CGImageCreate(width,
									   height,
									   8,             // bits per component
									   8 * channels,  // bits per pixel
									   step,          // bytes per row
									   colorSpace,
									   kCGImageAlphaNone,
									   imgDataProvider,
									   NULL,
									   NO,
									   kCGRenderingIntentDefault);

	// release what we no longer need; the caller is responsible for
	// releasing the returned CGImageRef
	CGColorSpaceRelease(colorSpace);
	CGDataProviderRelease(imgDataProvider);

	return cgImage;
}

Then, use the CGImage you got to create a UIImage, which can be displayed to the user.
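For example (cvImage and imageView here are placeholders: an IplImage you already have and a UIImageView in your view hierarchy):

// wrap the CGImageRef in an autoreleased UIImage and hand it to UIKit;
// we own the CGImageRef (see the function above), so release it here
CGImageRef cgImage = [self getCGImageFromCVImage:cvImage];
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
imageView.image = uiImage;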

2. If you try to load an image using cvLoadImage on the iPhone, process it with OpenCV (e.g. try to find faces in it) and then display it, all you will see is a black rectangle with some color junk at the beginning (by the way, face detection will not work on this picture either – 0 objects will be found). This is because cvLoadImage does not work properly on the iPhone hardware for some reason (though, as usual, everything is fine in the simulator). To cure this, open the image using the iPhone SDK APIs and then convert it to an IplImage (boris, thanks again):

- (void)manipulateOpenCVImagePixelDataWithCGImage:(CGImageRef)inImage openCVimage:(IplImage *)openCVimage
{
	// Create the bitmap context
	CGContextRef cgctx = [self createARGBBitmapContext:inImage];
	if (cgctx == NULL)
	{
		// error creating context
		return;
	}

	// openCVimage is expected to be an 8-bit, 3-channel (BGR) image
	int height = openCVimage->height;
	int width = openCVimage->width;
	int step = openCVimage->widthStep;
	int channels = openCVimage->nChannels;
	uchar *cvdata = (uchar *)openCVimage->imageData;

	CGRect rect = {{0,0},{width,height}};

	// Draw the image into the bitmap context. Once we draw, the memory
	// allocated for the context will contain the raw image data in the
	// specified color space (premultiplied ARGB, 4 bytes per pixel).
	CGContextDrawImage(cgctx, rect, inImage);

	// Now we can get a pointer to the image data associated with the
	// bitmap context.
	unsigned char *data = (unsigned char *)CGBitmapContextGetData(cgctx);

	if (data != NULL)
	{
		// Copy the ARGB pixels into the IplImage, reordering to OpenCV's
		// BGR layout: in each 4-byte pixel, A is at offset 0, R at 1,
		// G at 2 and B at 3.
		int x, y;
		for (y = 0; y < height; ++y)
		{
			for (x = 0; x < width; ++x)
			{
				cvdata[y*step + x*channels + 0] = data[(4*y*width) + (4*x) + 3]; // B
				cvdata[y*step + x*channels + 1] = data[(4*y*width) + (4*x) + 2]; // G
				cvdata[y*step + x*channels + 2] = data[(4*y*width) + (4*x) + 1]; // R
			}
		}
	}

	// When finished, release the context
	CGContextRelease(cgctx);
	// Free the image data memory that was malloc'd for the context
	if (data)
	{
		free(data);
	}
}

- (CGContextRef)createARGBBitmapContext:(CGImageRef)inImage
{
	CGContextRef context = NULL;
	CGColorSpaceRef colorSpace;
	void * bitmapData;
	int bitmapByteCount;
	int bitmapBytesPerRow;

	// Get image width, height. We'll use the entire image.
	size_t pixelsWide = CGImageGetWidth(inImage);
	size_t pixelsHigh = CGImageGetHeight(inImage);

	// Declare the number of bytes per row. Each pixel in the bitmap in this
	// example is represented by 4 bytes; 8 bits each of red, green, blue, and
	// alpha.
	bitmapBytesPerRow = (pixelsWide * 4);
	bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

	// Use the device RGB color space.
	colorSpace = CGColorSpaceCreateDeviceRGB();
	if (colorSpace == NULL)
	{
		return NULL;
	}

	// Allocate memory for image data. This is the destination in memory
	// where any drawing to the bitmap context will be rendered.
	bitmapData = malloc( bitmapByteCount );
	if (bitmapData == NULL)
	{
		CGColorSpaceRelease( colorSpace );
		return NULL;
	}

	// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
	// per component. Regardless of what the source image format is
	// (CMYK, Grayscale, and so on) it will be converted over to the format
	// specified here by CGBitmapContextCreate.
	context = CGBitmapContextCreate (bitmapData,
									 pixelsWide,
									 pixelsHigh,
									 8, // bits per component
									 bitmapBytesPerRow,
									 colorSpace,
									 kCGImageAlphaPremultipliedFirst);
	if (context == NULL)
	{
		free (bitmapData);
	}

	// Make sure and release colorspace before returning
	CGColorSpaceRelease( colorSpace );

	return context;
}

- (IplImage *)getCVImageFromCGImage:(CGImageRef)cgImage
{
	IplImage *newCVImage = cvCreateImage(cvSize(CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)), 8, 3);

	[self manipulateOpenCVImagePixelDataWithCGImage:cgImage openCVimage:newCVImage];

	return newCVImage;
}
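Putting it all together, loading and converting might look like this (the resource name is just an example; note that the caller owns the returned IplImage):

// load the picture through UIKit instead of cvLoadImage
UIImage *uiImage = [UIImage imageNamed:@"faces.jpg"];
IplImage *cvImage = [self getCVImageFromCGImage:uiImage.CGImage];

// ... run OpenCV processing on cvImage here ...

// getCVImageFromCGImage allocates with cvCreateImage, so release when done
cvReleaseImage(&cvImage);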

3. If you are going to change OpenCV itself, adding the "--enable-debug" configure option might be useful for debugging. Note that this reduces performance (it might work up to 1.5 times slower). Still, it is worth adding anyway, since without it, stepping into an OpenCV API in the Xcode debugger might freeze or crash the application. Also, if you are using OpenCV as a static library, make sure that all the OpenCV headers added to your project are up-to-date; otherwise the application might not work properly.

And now the sad part…
The performance of face detection on the iPhone is painful. Processing a VGA image (640×480) with three faces in it takes 6 to 20 seconds (depending on the cvHaarDetectObjects parameters). A 320×240 image is a bit faster, but still slow: 1 to 6 seconds.
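For reference, here is a sketch of the kind of cvHaarDetectObjects call these timings refer to; the cascade file name, scale factor and minimum window size are example values you would tune for your own speed/accuracy trade-off:

// load the frontal face cascade (ship the XML with the app and resolve
// the real path via NSBundle; the bare file name is just an example)
CvHaarClassifierCascade *cascade = (CvHaarClassifierCascade *)
		cvLoad("haarcascade_frontalface_alt.xml", NULL, NULL, NULL);
CvMemStorage *storage = cvCreateMemStorage(0);

// a larger scale factor (1.2 instead of 1.1) and a larger minimum
// search window make detection faster, at the cost of missing small faces
CvSeq *faces = cvHaarDetectObjects(cvImage, cascade, storage,
								   1.2,                      // scale factor
								   3,                        // min neighbors
								   CV_HAAR_DO_CANNY_PRUNING, // skip flat regions
								   cvSize(40, 40));          // min window size

NSLog(@"found %d face(s)", faces ? faces->total : 0);
cvReleaseMemStorage(&storage);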

Okay, that’s all, folks.

Comments (21)

In Igor’s blog, his demo looked pretty encouraging: it shows that he was able to get face detection in the range of 100 milliseconds.
http://zaaghad.blogspot.com/2009/02/opencv-in-action-on-iphone-simulator.html
I wonder what optimizations he did… I will post a question for him too.

That’s only on the simulator, I guess, running on Intel, so that explains the speed.

How much time does it take to find one face in, let’s say, 320×240?

You are absolutely right. Igor’s 100 ms is for the simulator. On real hardware it’s not that good. Finding a face in a 320×240 picture takes from 300 ms to 6 seconds, depending on the parameters. But 300 ms is for a 150×150 search window, which is not a good way to look for faces in real life, since a lot of faces will be missed.
Things go better when a fixed-point version of Viola-Jones is used. In that case it works 2 times faster. Good, but still not perfect.

Cool to port OpenCV to the iPhone.
Actually, the face detection algorithm in OpenCV is very time-consuming and complicated. Why not simplify it? Then you can get a fast algorithm on the iPhone.

Very good tutorial.

If there were one simple tutorial on using the OpenCV static library on the iPhone, with one simple demo, that would be perfect, I think.

Thanks,

Hi Stone,

>> Actually, the face detection algorithm in OpenCV is very time-consuming and complicated. Why not simplify it? Then you can get a fast algorithm on the iPhone.

Please refer to our other pages, where the same problem is discussed in depth:

1) http://www.computer-vision-software.com/blog/2009/06/fastfurious-face-detection-with-opencv (super fast face detection);
2) http://www.computer-vision-software.com/blog/2009/04/fixing-opencv (face detection for fixed point processor).

I guess they will be interesting for you.

Aleksey

[…] might want to take a look at this […]

How do we add libstdc++ to our project like you specified in Step 9 of the static library instructions?

Thank you sir, and thank you for your blog post on creating the framework at http://zaaghad.blogspot.com/2009/02/universal-i386arm-opencv-framework-for.html

Hello Sir,
Thanks for the explanation, but I am still confused.
I am using an iPhone 3G and I want to try out this application. Can you provide me with the sample code or the working code of the same application? I would be very thankful.

Hi Hardik,

We don’t share our code on the blog, as per our policy. If you want the code or an executable file, you could discuss it with our marketing team; see the “about” page for details.

Aleksey

[…] Linking tips: OpenCV vs. Apple iPhone […]


Hi everyone,
OpenCV on the iPhone is possible.
First, check one of the best OpenCV-on-iPhone documents on the web:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en

I used it, and my team already has its first iPhone app with OpenCV on the App Store:
Flags&Faces
http://itunes.apple.com/app/flags-faces/id371891114?mt=8

Face detection is pretty fast. Tips? Well, scale down images before doing the haarcascade detection, and scale the result coordinates back up to the original image. I always work with a 480×640 image as the maximum size.
As you can see in the app, face detection, face border detection, smoothing and blending all together take 2 or 3 seconds.
See you soon guys!
And thanks, this site was my first hint…
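For anyone who wants to try the downscaling trick described above, here is a rough sketch; the 0.5 factor is arbitrary, and cascade/storage are assumed to be set up as in the detection sketch earlier in the post:

// detect on a half-size copy, then map the rectangles back
double scale = 0.5;
IplImage *small = cvCreateImage(cvSize((int)(cvImage->width * scale),
									   (int)(cvImage->height * scale)),
								cvImage->depth, cvImage->nChannels);
cvResize(cvImage, small, CV_INTER_LINEAR);

CvSeq *faces = cvHaarDetectObjects(small, cascade, storage,
								   1.2, 3, CV_HAAR_DO_CANNY_PRUNING,
								   cvSize(20, 20));

int i;
for (i = 0; i < (faces ? faces->total : 0); ++i)
{
	CvRect *r = (CvRect *)cvGetSeqElem(faces, i);
	// scale the detected rectangle back up to the original image
	cvRectangle(cvImage,
				cvPoint((int)(r->x / scale), (int)(r->y / scale)),
				cvPoint((int)((r->x + r->width) / scale),
						(int)((r->y + r->height) / scale)),
				CV_RGB(255, 0, 0), 2, 8, 0);
}
cvReleaseImage(&small);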

[…] Let me point you to this link at http://www.computer-vision-software.com for compiling OpenCV, a well-known library for face recognition: guide to openCV compile on Iphone […]

When I run the application, it shows an error message like _dyld_dyld_fatal_error. Can anyone suggest how to handle that error? (Note: I am using the private framework.)

BTW, thanks for such an excellent tutorial, which explains the whole process right from integration to implementation within an iPhone application.

But if I wish to use the same framework to detect the corners of a page placed on a desk, how can I capture that frame of the page? I mean, where do I need to make changes within this framework?

[…] good people @ computer-vision-software.com have posted a guideline on how to compile OpenCV on iPhone and link them as static libraries, and I followed it. I did have to recompile it with one change – […]
