
I'm trying to blur a QImage alpha channel. My current implementation uses the deprecated alphaChannel() method and is slow.

QImage blurImage(const QImage & image, double radius)
{
  QImage newImage = image.convertToFormat(QImage::Format_ARGB32);

  QImage alpha = newImage.alphaChannel();
  QImage blurredAlpha = alpha;
  for (int x = 0; x < alpha.width(); x++)
  {
    for (int y = 0; y < alpha.height(); y++)
    {
      uint color = calculateAverageAlpha(x, y, alpha, radius);
      blurredAlpha.setPixel(x, y, color);
    }
  }
  newImage.setAlphaChannel(blurredAlpha);

  return newImage;
}

I also tried to implement it using QGraphicsBlurEffect, but that doesn't affect the alpha channel.

What is the proper way to blur a QImage alpha channel?

Comment: maybe worthwhile to use OpenCV for image processing. – UmNyobe

1 Answer


I have faced a similar question about pixel read/write access:

  1. Invert your loops. An image is laid out in memory as a succession of rows, so you should iterate over the height in the outer loop and the width in the inner loop.
  2. Use QImage::scanLine to access the data, rather than the expensive QImage::pixel and QImage::setPixel. Pixels within a scanline (i.e. a row) are guaranteed to be consecutive in memory.

Your code will look like:

const int depth = 4; // bytes per pixel in Format_ARGB32
for (int y = 0; y < image.height(); ++y) {
    uchar* scan = image.scanLine(y);
    for (int x = 0; x < image.width(); ++x) {
        // each pixel is in fact a 32-bit ARGB value
        QRgb* rgbpixel = reinterpret_cast<QRgb*>(scan + x * depth);
        // note: fromRgba keeps the alpha, unlike the QColor(QRgb) constructor
        QColor color = QColor::fromRgba(*rgbpixel);
        int alpha = calculateAverageAlpha(x, y, color, image);
        color.setAlpha(alpha);

        // write back
        *rgbpixel = color.rgba();
    }
}

You can go further and optimize the computation of the alpha average. Consider the sum of alpha values over a square window of radius r. Let s(x, y) be that sum for the window centered at (x, y). When you move the window one pixel in either direction, a single line of pixels enters while another leaves. Say you move horizontally: if l(x, y) is the sum of the vertical line of 2r + 1 pixels centered at (x, y), you have

  s(x + 1, y) = s(x, y) + l(x + r + 1, y) - l(x - r, y)

This allows you to efficiently compute a matrix of sums (and then averages, by dividing by the number of pixels in the window) in a single pass.
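The same sliding-window idea can be sketched as a one-dimensional (horizontal) box-blur pass. This is a minimal illustration over a plain alpha buffer rather than a QImage, and boxBlurRow is a hypothetical helper name; with a QImage you would read each row via scanLine() as shown above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One horizontal pass of a box blur over an 8-bit alpha buffer,
// using a running window sum so each pixel costs O(1) instead of O(radius).
std::vector<uint8_t> boxBlurRow(const std::vector<uint8_t>& alpha,
                                int width, int height, int radius)
{
    std::vector<uint8_t> out(alpha.size());
    for (int y = 0; y < height; ++y) {
        const uint8_t* row = &alpha[y * width];

        // Initial window sum for x = 0, clamped at the image border.
        int sum = 0, count = 0;
        for (int x = -radius; x <= radius; ++x) {
            if (x >= 0 && x < width) { sum += row[x]; ++count; }
        }

        for (int x = 0; x < width; ++x) {
            out[y * width + x] = static_cast<uint8_t>(sum / count);
            // Slide the window right: the pixel at x+r+1 enters,
            // the pixel at x-r leaves (when they exist).
            const int enter = x + radius + 1;
            const int leave = x - radius;
            if (enter < width) { sum += row[enter]; ++count; }
            if (leave >= 0)    { sum -= row[leave]; --count; }
        }
    }
    return out;
}
```

Running the same pass vertically on the result gives the full 2D box blur.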

I suspect this kind of optimization is already implemented, in a much better way, in libraries such as OpenCV, so I would encourage you to use existing OpenCV functions if you wish to save time.