Blurring thumbnails: ConvolveOp gotchas

As part of my task of blurring thumbnail images, I wrote some code to use ConvolveOp. ConvolveOp is basically a filter that generates a new value for each pixel of an image by mixing the value of that pixel with the values of the pixels around it. The mixing depends on a "kernel," which is a matrix of weights. The ConvolveOp filter positions the center of the kernel over each pixel of the source image, mixes the values of that pixel and its neighboring pixels together, depending on their weights in the matrix, and then assigns the result to the pixel in the destination image. The size of the kernel determines the number of neighboring pixels that are used in the convolve operation. If the kernel is 5x5 pixels, for example, then the convolve operation will include the original pixel and those up to two pixels distant.
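For reference, the basic usage looks something like this. This is a minimal sketch rather than the exact code from my thumbnail task, and the class and method names are just placeholders:

    import java.awt.image.BufferedImage;
    import java.awt.image.ConvolveOp;
    import java.awt.image.Kernel;
    import java.util.Arrays;

    public final class BlurSketch {

        // Apply a 5x5 box blur: all 25 weights are 1/25, so they sum to 1.
        static BufferedImage boxBlur(BufferedImage source) {
            float[] weights = new float[25];
            Arrays.fill(weights, 1.0f / 25.0f);

            Kernel kernel = new Kernel(5, 5, weights);  // width, height, weights
            ConvolveOp op = new ConvolveOp(kernel);     // default edge handling
            return op.filter(source, null);             // null: allocate a destination image
        }
    }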

It seems simple, but ConvolveOp is actually the main (and sometimes only) building block in many image-processing operations. Basic blurring and sharpening can be done with ConvolveOp. The only difference between a blur filter, a Gaussian blur filter, a sharpening filter, an edge-finding filter, and an embossing filter is the values in the kernel.
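To make that concrete, here are two standard textbook kernels plugged into the same ConvolveOp machinery (again just a sketch, reusing the imports from the snippet above); only the weights change:

    // 3x3 box blur: nine equal weights that sum to 1.
    float[] blurWeights = {
        1/9f, 1/9f, 1/9f,
        1/9f, 1/9f, 1/9f,
        1/9f, 1/9f, 1/9f,
    };

    // 3x3 sharpen: the weights still sum to 1, but the center pixel is boosted
    // and its neighbors are subtracted from it.
    float[] sharpenWeights = {
         0f, -1f,  0f,
        -1f,  5f, -1f,
         0f, -1f,  0f,
    };

    ConvolveOp blurOp    = new ConvolveOp(new Kernel(3, 3, blurWeights));
    ConvolveOp sharpenOp = new ConvolveOp(new Kernel(3, 3, sharpenWeights));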

However, there is one complication with doing convolves: what do you do with the edge pixels? When ConvolveOp positions the kernel at the upper-left corner of the image, for example, the top and left portions of the kernel hang off the edge of the image (unless the kernel is only 1x1). What do we do in that situation?

There are lots of different ways to handle the edges, but unfortunately ConvolveOp provides only two, neither of which is very useful. EDGE_ZERO_FILL causes ConvolveOp to act as if the missing pixels are present and set to zero. EDGE_NO_OP tells ConvolveOp to leave the edge pixels alone, simply copying them to the destination unchanged.
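In code, the edge strategy is the second argument of the three-argument constructor (the one-argument constructor used above defaults to EDGE_ZERO_FILL). A sketch, reusing the 5x5 kernel from the first snippet:

    // Missing neighbors are treated as zero, i.e. black. This is also the default.
    ConvolveOp zeroFill = new ConvolveOp(kernel, ConvolveOp.EDGE_ZERO_FILL, null);

    // Edge pixels are copied from source to destination without being convolved.
    ConvolveOp noOp = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);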

A blur filter uses a kernel in which all of the weights are more or less equal and sum to 1. Since the weights sum to 1, the filter does not alter the overall brightness of the image. But using the default EDGE_ZERO_FILL edge strategy causes ConvolveOp to effectively mix a bunch of black pixels into the edges of the image, making them darker. That's usually not what the user expects. The user expects the image to be blurred but still have crisp edges. However, the other option, EDGE_NO_OP, just leaves the edge pixels unblurred, which is even worse.

Ideally ConvolveOp should provide a third option, EDGE_USE_NEAREST, which would use the value of the nearest image pixel to fill in the missing pixels. But there is no such option. Also, there is apparently no way to plug in a custom edge strategy. (This seems like it would be an incredibly useful feature, but I suspect that it doesn't exist because the actual convolve operation is implemented in native code.)

However, there is a way around this. Instead of relying on ConvolveOp to figure out the values of the edge pixels, just provide them yourself. This requires creating an intermediate image that is slightly larger than the original image--just large enough to include the extra pixels that you need. If your kernel is 5x5 pixels, then you need 2 additional pixels on each side of the original image, and therefore the intermediate image will be 4 pixels wider and 4 pixels taller than the original. Then, copy the original image into the center of the intermediate image, and fill in the outer pixels however you please. Run ConvolveOp on the intermediate image, then strip off the edge pixels that you added by copying the center portion (less the edge pixels) into the final destination image.
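Here is a rough sketch of that pipeline, written as another method of the sketch class above (it also needs java.awt.Graphics2D imported). The fillBorder helper is a hypothetical hook for whatever border-filling strategy you choose; one way to write it is shown after the next paragraph.

    // Pad the image, fill the border, convolve, then crop the padding back off.
    // pad is half the kernel size, rounded down: 2 for a 5x5 kernel.
    static BufferedImage convolveWithPadding(BufferedImage source, ConvolveOp op, int pad) {
        int w = source.getWidth();
        int h = source.getHeight();

        // Intermediate image with 'pad' extra pixels on every side.
        int type = source.getType() == BufferedImage.TYPE_CUSTOM
                ? BufferedImage.TYPE_INT_ARGB   // fall back for custom color models
                : source.getType();
        BufferedImage padded = new BufferedImage(w + 2 * pad, h + 2 * pad, type);

        Graphics2D g = padded.createGraphics();
        fillBorder(g, source, pad);             // fill the outer pixels however you please
        g.drawImage(source, pad, pad, null);    // original image goes in the center
        g.dispose();

        BufferedImage convolved = op.filter(padded, null);

        // Strip off the added edge pixels by keeping only the center portion.
        return convolved.getSubimage(pad, pad, w, h);
    }

(getSubimage shares its data buffer with the convolved image, which is fine here because the convolved image is just a throwaway intermediate.)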

For the purposes of blurring, all I cared about was that the extra edge pixels be more or less like the edge pixels of the original image. So after creating the intermediate image, I simply scaled the original image up by 4 pixels in each dimension, placed it in the intermediate image, and then copied the original image (at its original size) into the center of the intermediate image. That effectively made the edge pixels of the intermediate image the same as the edge pixels of the original image.
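A fillBorder along those lines might look like this (my interpretation of the trick, not code lifted from the original task):

    // Draw the source scaled up by 2*pad pixels in each dimension, so its stretched
    // edges cover the border of the intermediate image. The caller then draws the
    // unscaled original over the center, leaving only the border from the scaled copy.
    static void fillBorder(Graphics2D g, BufferedImage source, int pad) {
        int w = source.getWidth();
        int h = source.getHeight();
        g.drawImage(source, 0, 0, w + 2 * pad, h + 2 * pad, null);
    }

With that in place, calling convolveWithPadding with a 5x5 blur ConvolveOp and pad = 2 produces a blurred image the same size as the original, without the darkened edges.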

Obviously this strategy for handling edge pixels is only useful for small images. If the image was 50 megapixels, then creating another 50-megapixel buffer just to add some edge pixels would be a waste. But if you're working with 50-megapixel images, then you're probably going to want to write your own image-processing code anyway and not rely on Java 2D. For thumbnail images, creating an intermediate image is quick and easy.
