
Method of removing the background color of images in iOS

Actual project scenario: remove the pure white background of an image to get a transparent-background image for a collage feature.

This article introduces three processing methods across two approaches (which somehow reminded me of Kong Yiji). I haven't compared their performance, and I'd be grateful if anyone who has could let me know.

The two approaches are Core Image and Core Graphics/Quartz 2D.

Core Image

Core Image is a very powerful framework. It lets you easily apply various filters to process images, such as adjusting brightness, color, or exposure. It uses the GPU (or CPU) to process image data and video frames very quickly, even in real time. It also hides all the details of the underlying graphics processing: you simply use the provided API, without worrying about how OpenGL or OpenGL ES fully utilizes the GPU, or what role GCD plays. Core Image handles all the details.

Apple's official Core Image Programming Guide includes a Chroma Key Filter Recipe, an example of removing a background color.

It uses the HSV color model, because HSV represents color ranges more naturally than RGB does.

The general flow of the processing:

1. Create a cube map (cubeMap) covering the color range to remove, setting the alpha of the target colors to 0.0f.
2. Process the source image's colors with the CIColorCube filter and the cubeMap.
3. Convert the CIImage produced by CIColorCube into a Core Graphics CGImageRef.
4. Obtain the result image via imageWithCGImage:.

Note: in the last step you cannot use imageWithCIImage: directly, because the result is not a standard UIImage; if used directly, it may not display properly.

- (UIImage *)removeColorWithMinHueAngle:(float)minHueAngle maxHueAngle:(float)maxHueAngle image:(UIImage *)originalImage {
    CIImage *image = [CIImage imageWithCGImage:originalImage.CGImage];
    // Pass @{kCIContextUseSoftwareRenderer: @YES} instead to force CPU rendering
    CIContext *context = [CIContext contextWithOptions:nil];
    /** Note
     * A UIImage initialized from a CIImage is not a standard UIImage like one created from a CGImage,
     * so unless it is rendered through a CIContext it cannot be displayed normally.
     */
    CIImage *renderBgImage = [self outputImageWithOriginalCIImage:image minHueAngle:minHueAngle maxHueAngle:maxHueAngle];
    CGImageRef renderImg = [context createCGImage:renderBgImage fromRect:image.extent];
    UIImage *renderImage = [UIImage imageWithCGImage:renderImg];
    CGImageRelease(renderImg); // createCGImage: returns a +1 reference
    return renderImage;
}
struct CubeMap {
    int length;
    float dimension;
    float *data;
};

- (CIImage *)outputImageWithOriginalCIImage:(CIImage *)originalImage minHueAngle:(float)minHueAngle maxHueAngle:(float)maxHueAngle {
    struct CubeMap map = createCubeMap(minHueAngle, maxHueAngle);
    const unsigned int size = 64;
    // Wrap the cube data in an NSData that frees the buffer when done
    NSData *data = [NSData dataWithBytesNoCopy:map.data
                                        length:map.length
                                  freeWhenDone:YES];
    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
    [colorCube setValue:@(size) forKey:@"inputCubeDimension"];
    // Set the cube data
    [colorCube setValue:data forKey:@"inputCubeData"];
    [colorCube setValue:originalImage forKey:kCIInputImageKey];
    CIImage *result = [colorCube valueForKey:kCIOutputImageKey];
    return result;
}
struct CubeMap createCubeMap(float minHueAngle, float maxHueAngle) {
    const unsigned int size = 64;
    struct CubeMap map;
    map.length = size * size * size * sizeof(float) * 4;
    map.dimension = size;
    float *cubeData = (float *)malloc(map.length);
    float rgb[3], hsv[3], *c = cubeData;
    for (int z = 0; z < size; z++) {
        rgb[2] = ((double)z) / (size - 1); // Blue value
        for (int y = 0; y < size; y++) {
            rgb[1] = ((double)y) / (size - 1); // Green value
            for (int x = 0; x < size; x++) {
                rgb[0] = ((double)x) / (size - 1); // Red value
                rgbToHSV(rgb, hsv);
                // Use the hue value to determine which colors to make transparent.
                // The minimum and maximum hue angle depends on
                // the color you want to remove.
                float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;
                // Calculate premultiplied alpha values for the cube
                c[0] = rgb[0] * alpha;
                c[1] = rgb[1] * alpha;
                c[2] = rgb[2] * alpha;
                c[3] = alpha;
                c += 4; // advance our pointer into memory for the next color value
            }
        }
    }
    map.data = cubeData;
    return map;
}

rgbToHSV is not given in the official documentation; I found the conversion in one of the blog posts linked later in this article. Thanks to its author.

void rgbToHSV(float *rgb, float *hsv) {
    float min, max, delta;
    float r = rgb[0], g = rgb[1], b = rgb[2];
    float *h = hsv, *s = hsv + 1, *v = hsv + 2;
    min = fmin(fmin(r, g), b);
    max = fmax(fmax(r, g), b);
    *v = max;
    delta = max - min;
    if (max != 0) {
        *s = delta / max;
    } else {
        *s = 0;
        *h = -1;
        return;
    }
    if (r == max)
        *h = (g - b) / delta;
    else if (g == max)
        *h = 2 + (b - r) / delta;
    else
        *h = 4 + (r - g) / delta;
    *h *= 60;
    if (*h < 0)
        *h += 360;
}

Next, let's try removing a green background to see how it looks.

Using an HSV picker tool, we can determine that green hue values fall roughly in the range 50-170.

Call this method and try it out

[[SPImageChromaFilterManager sharedManager] removeColorWithMinHueAngle:50 maxHueAngle:170 image:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"nb" ofType:@"jpeg"]]];

Effect

The effect is still acceptable.

If you look closely at the HSV model, you may notice that we are helpless when it comes to selecting grayscale, black, or white by hue angle alone; we have to bring saturation (S) and value (V) into the judgment as well. Interested readers can experiment by changing the alpha test in the code, float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;, and see the effect. (As for how RGB and HSV are converted, please look up the conversion formulas; I'm not familiar with the math myself, alas.)

If you are interested in Core Image, check out this excellent series of articles:

iOS8 Core Image In Swift: Automatic Image Enhancement and the Use of Built-in Filters

iOS8 Core Image In Swift: More Complex Filters

iOS8 Core Image In Swift: Face Detection and Mosaic

iOS8 Core Image In Swift: Real-time Filters in Video

Core Graphics/Quartz 2D

As mentioned above, Core Image, built on OpenGL, is clearly very powerful. Core Graphics, another cornerstone of iOS rendering, is just as powerful. Exploring it taught me a lot about images, so I'm summarizing it here for future reference.

If you're not interested in the exploration, skip straight to the last part of the article: Masking an Image with Color.

Bitmap


In the official Quartz 2D documentation, a bitmap is described as follows:

A bitmap image (or sampled image) is an array of pixels (or samples). Each pixel represents a single point in the image. JPEG, TIFF, and PNG graphic files are examples of bitmap images.

32-bit and 16-bit pixel formats for CMYK and RGB color spaces in Quartz 2D

Back to our requirement of removing a specified color from an image: if we can read the RGBA value of each pixel, test each channel, set the pixel's alpha to 0 when the channels fall in the target range, and output the result as a new image, we achieve the same effect as the cubeMap approach above.

The powerful Quartz 2D gives us exactly this capability. See the code example below:

// Release callback so the pixel buffer is freed when the data provider is done with it
static void releasePixelData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)removeColorWithMaxR:(float)maxR minR:(float)minR maxG:(float)maxG minG:(float)minG maxB:(float)maxB minB:(float)minB image:(UIImage *)image {
    // Allocate memory
    const int imageWidth = image.size.width;
    const int imageHeight = image.size.height;
    size_t bytesPerRow = imageWidth * 4;
    uint32_t *rgbImageBuf = (uint32_t *)malloc(bytesPerRow * imageHeight);
    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImageBuf, imageWidth, imageHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image.CGImage);
    // Traverse pixels
    int pixelNum = imageWidth * imageHeight;
    uint32_t *pCurPtr = rgbImageBuf;
    for (int i = 0; i < pixelNum; i++, pCurPtr++) {
        // With kCGBitmapByteOrder32Little the bytes are reversed:
        // ptr[3] is R, ptr[2] is G, ptr[1] is B, ptr[0] becomes the alpha byte
        uint8_t *ptr = (uint8_t *)pCurPtr;
        if (ptr[3] >= minR && ptr[3] <= maxR &&
            ptr[2] >= minG && ptr[2] <= maxG &&
            ptr[1] >= minB && ptr[1] <= maxB) {
            ptr[0] = 0;
        } else {
            printf("\n---->ptr0:%d ptr1:%d ptr2:%d ptr3:%d<----\n", ptr[0], ptr[1], ptr[2], ptr[3]);
        }
    }
    // Convert the memory back into an image
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rgbImageBuf, bytesPerRow * imageHeight, releasePixelData);
    CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, 8, 32, bytesPerRow, colorSpace, kCGImageAlphaLast | kCGBitmapByteOrder32Little, dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    UIImage *resultUIImage = [UIImage imageWithCGImage:imageRef];
    // Release
    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return resultUIImage;
}

Remember the drawback of the HSV model mentioned in the Core Image section? Quartz 2D works directly on the RGBA values, which neatly avoids the unfriendliness toward black and white: we just set RGB ranges, since black and white are easy to pin down in the RGB model. We can wrap it roughly like this:

- (UIImage *)removeWhiteColorWithImage:(UIImage *)image{
 return [self removeColorWithMaxR:255 minR:250 maxG:255 minG:240 maxB:255 minB:240 image:image];
}
- (UIImage *)removeBlackColorWithImage:(UIImage *)image{
 return [self removeColorWithMaxR:15 minR:0 maxG:15 minG:0 maxB:15 minB:0 image:image];
}

Let's compare the results on white backgrounds.

It seems okay, but it isn't very friendly to sheer clothing. Take a look at the test image I made:

The "tattered clothes" effect is obvious once the image is placed on anything other than a white background. None of the three methods I tried avoids this problem. If anyone knows a good solution, please tell me; I will be very grateful. (Kneeling in advance.)

Besides the issue above, the values this method reads back per pixel differ slightly from the values used when drawing. The error, however, is almost invisible to the naked eye.


As shown in the figure below, we drew with RGB values of 100/240/220, but the values read back in the CG processing above were 92/241/220. The color difference between "new" and "current" in the image is nearly invisible, so this minor discrepancy doesn't noticeably affect the result.

Masking an Image with Color

After working through the previous method, I found this one while rereading the documentation; it felt like a gift from Apple. Straight to the code:

- (UIImage *)removeColorWithMaxR:(float)maxR minR:(float)minR maxG:(float)maxG minG:(float)minG maxB:(float)maxB minB:(float)minB image:(UIImage *)image {
    // Components come in {min, max} pairs per channel; the source image must not already have an alpha channel
    const CGFloat myMaskingColors[6] = {minR, maxR, minG, maxG, minB, maxB};
    CGImageRef ref = CGImageCreateWithMaskingColors(image.CGImage, myMaskingColors);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref); // CGImageCreateWithMaskingColors returns a +1 reference
    return result;
}

Official documentation here

Summary

The HSV color model is better suited than RGB for removing chromatic colors from an image, while RGB is better for black and white. Since my project only needed to remove a white background, I ultimately adopted the last method.
