
This question has been asked a lot, but I've read through the questions posted here and built my own implementation a while ago. I ditched the project, then decided to give it another shot just now after digging through some old projects and coming across it again. However, I still can't figure out what's wrong with this implementation of the smooth coloring algorithm.

Specifically, I have an error (possibly?) in the smooth banding and in the coloring of the "image" (a tkinter canvas) in general. I suspect this is just a result of the RGB palette I have defined and the depth to which I am iterating, but I am not 100% positive about that. It appears to band smoothly for the most part; more on this later. I am going to try to make this as minimal as I can.

I have defined the following RGB palette (137 colors) here, and this is the result I get at a fairly decent zoom level, computing the set to a depth of 5,000.

at a fairly decent zoom level

Now, here's where it looks weird. This is after one or two zoom levels, and, to keep an optimistic outlook, I want to say it's just a result of there being so many differently colored pixels. It looks like this at zoom levels that are further out as well, although naturally less prominently.

zoomed out

Here are the relevant functions where I recursively compute the set and color it:

def mandel(self, x, y, z, iteration):

    mod_z = math.sqrt((z.real * z.real) + (z.imag * z.imag))

    # If it's not in the set, or we have reached the maximum depth
    if  mod_z >= 100.0 or iteration == DEPTH:
        if iteration < DEPTH:
            if iteration > MAX_COLOR:
                iteration = iteration % MAX_COLOR
            nu = math.log2(math.log2(abs(mod_z)) / math.log2(2)) / math.log2(2)
            value = iteration + 5 - nu
            print(iteration)
            color_1 = RGB_PALETTE[math.floor(value)]
            color_2 = RGB_PALETTE[math.floor(value) + 1]
            fractional_iteration = value % 1

            color = self.linear_interp(color_1, color_2, fractional_iteration)

            self.canvas.create_line(x, y, x + 1, y + 1,
                                    fill = self.rgb_to_hex(color))

    else:

        z = (z * z) + self.c
        self.mandel(x, y, z, iteration + 1)

    return z

def create_image(self):
    begin = time.time() #For computing how long it took (start time)
    diam_a = self.max_a - self.min_a
    diam_b = self.max_b - self.min_b
    for y in range(HEIGHT):
        for x in range(WIDTH):
            self.c =  complex(x * (diam_a / WIDTH) + self.min_a, 
                              y * (diam_b / HEIGHT) + self.min_b)

            constant = 1.0
            z = self.c

            bound = 1 / 4
            q = (self.c.real - bound)**2 + (self.c.imag * self.c.imag)
            x1 = self.c.real
            y2 = self.c.imag * self.c.imag
            sixteenth = 1 / 16

            # See if it's in the period-2 bulb, for a different coloring scheme between the main bulb and other bulbs of the set
            if not (q*(q + (x1 - bound)) < y2 / (constant * 4) or 
                    (x1 + constant)**2 + y2 < sixteenth): 
                #value of the recursive call while it is not in the set
                z = self.mandel(x, y, z, iteration = 0)

            if self.progress_bar != None:
                self.progress_bar["value"] = y

    self.canvas.update() #Update the progress bar
    print("Took %s Minutes to Render" % ((time.time() - begin) / MINUTE))

If the above isn't enough, the full code that pertains to computing and drawing the set is posted here.

The smoothing can only have an effect where the colour gradient between adjacent pixels is low. Where the function is very ragged, with high gradients, the smoothing doesn't show: for example, if you determine a "smooth" beige colour, the result will look noisy when the colour values of the adjacent pixels, smoothed or not, are reds and cyans. The images in the Wikipedia article that compare banded and smoothed rendering also have noisy parts in the branches. - M Oehm
You could probably adjust the colour palette and its limits, but I think that you can optimize it only for some parts of the image, just like you can only choose a good focus for parts of a photograph. - M Oehm
That makes sense. I've been playing around with this for a while trying to get better results, and this seems to be basically as good as it gets when implementing this specific smooth coloring, assuming I have indeed implemented it correctly. I have to admit that's slightly disappointing, as I get a better coloring with a basic linear interpolation. How would I go about adjusting the palette and its limits? I have very little experience with color gradients and manipulating them. - user4746908
Well, for example instead of RGB[math.floor(value)] you could use RGB[math.floor(c * value + v0) % NRGB] and experiment with the factor c and the offset v0. The modulo % NRGB should bring the index into an allowable range should it go out of bounds. `NRGB` is the length of the colour palette. - M Oehm
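The comment's suggestion could be tried as a small helper. This is only a sketch: `palette_color` is a hypothetical name, and the factor `c` and offset `v0` are knobs to experiment with, as the comment says.

```python
import math

def palette_color(value, palette, c=1.0, v0=0.0):
    """Scale and offset the smooth iteration value, then wrap the
    resulting index with modulo so it always stays inside the palette.

    `c` and `v0` are tuning parameters to experiment with; neither has
    a canonical value.
    """
    idx = math.floor(c * value + v0) % len(palette)
    return palette[idx]
```

The modulo also removes the need for the `iteration % MAX_COLOR` clamp in the question's code, since any index wraps back into range.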

1 Answer


What you're trying to eliminate is aliasing, so the solution is (obviously) anti-aliasing. An effective solution for fractals is adaptive supersampling; see this adaptively supersampled Mandelbrot sequence. Supersampling just means that every pixel in your output is an average of several samples taken within the pixel's area (remember your pixels are not infinitely small points - they have area). There are different sampling patterns with different trade-offs between aesthetics, run time and implementation complexity.
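As a minimal sketch of plain (non-adaptive) supersampling: `sample` below is a hypothetical stand-in for whatever function maps a point in the complex plane to an (r, g, b) colour, and `scale` converts pixel units to plane units.

```python
def supersample_pixel(sample, px, py, scale, n=2):
    """Average an n x n grid of samples spread over pixel (px, py).

    `sample(cx, cy)` is assumed to return an (r, g, b) tuple of ints;
    the offsets place the samples evenly inside the pixel's area.
    """
    total = [0, 0, 0]
    for j in range(n):
        for i in range(n):
            cx = (px + (i + 0.5) / n) * scale
            cy = (py + (j + 0.5) / n) * scale
            r, g, b = sample(cx, cy)
            total[0] += r
            total[1] += g
            total[2] += b
    return tuple(t // (n * n) for t in total)
```

This takes n² samples for every pixel regardless of how noisy the area is, which is what the adaptive variants below try to avoid.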

Fractals such as the Mandelbrot or Julia sets have a property that makes adaptive supersampling a good choice: there are big areas where the values don't change very much (where extra computation would be wasted), and other areas where the values change a lot and therefore need many samples (i.e. the noisy bits in your images). There are a few methods you could use to determine which areas need heavy sampling, but two that are easy to think about are:

  • Multiple passes over the image
    • On each successive pass, if a pixel's color varies from its neighbors by more than a threshold, perform further sampling in the area of that pixel and average out the values
  • Always do a few samples for every pixel (say, 4)
    • Do more if those samples vary greatly
    • This will waste some computation in areas where the values are relatively similar
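The second method could be sketched roughly like this. Again, `sample(cx, cy)` is a hypothetical function returning an (r, g, b) tuple, and the spread `threshold` and refined grid size `max_n` are arbitrary assumptions to tune.

```python
def adaptive_pixel(sample, px, py, scale, threshold=900, max_n=8):
    """Take 4 initial samples; refine to a denser grid only if they disagree.

    `sample(cx, cy)` is assumed to return an (r, g, b) tuple of ints.
    `threshold` is compared against the squared per-channel spread and
    is just a tunable guess, not a canonical value.
    """
    def grid(n):
        return [sample((px + (i + 0.5) / n) * scale,
                       (py + (j + 0.5) / n) * scale)
                for j in range(n) for i in range(n)]

    samples = grid(2)  # always take an initial 2 x 2 set of samples
    # Squared spread of each colour channel across the initial samples
    spread = sum((max(ch) - min(ch)) ** 2 for ch in zip(*samples))
    if spread > threshold:
        samples = grid(max_n)  # noisy area: sample much more densely
    return tuple(sum(ch) // len(samples) for ch in zip(*samples))
```

In a smooth region the initial four samples agree, so the pixel costs only four evaluations; in a noisy region the dense grid kicks in and the averaged result smooths out exactly the speckled areas visible in the question's images.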