On 07.07.2014 10:18, Jasper van de Gronde wrote:
The stacked box blur would be unacceptably wrong for export purposes (in contrast to what seems to be popular belief, you most definitely can see the difference between most of the simple approximations and a true Gaussian blur), but for viewing it might not be the worst possible idea.

As for speeding up the existing implementation, the current algorithm is probably /the/ fastest high-quality(!) implementation around (at the time I had a good look around to see what was there, and most programs either used bad approximations or a simple convolution...). Having said that, it turns out that this particular algorithm, though superfast in theory (something like 7 flops per pixel), does need quite a few tweaks to make it behave properly, which hurts its speed quite badly (and also makes use of SSE and the like a little harder).

So while I believe it might be hard to speed up the current implementation significantly, there is definitely room for a faster low-quality algorithm that could be used for viewing purposes. We could of course also use a GPU or whatever, but that's also not entirely trivial. (BTW, I should still have some attempts at replacing the current algorithm lying around, and I have a few weeks off soon, so perhaps it's time to dust that work off...)
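For reference, the "stacked box blur" mentioned above is usually built from repeated box (moving-average) passes; by the central limit theorem, three passes already come close to a Gaussian shape, while each pass costs O(n) via a sliding window sum. A minimal 1D sketch (illustrative only, with clamped edges; not Inkscape's actual code):

```python
def box_blur_1d(signal, radius):
    # One box (moving-average) pass using a sliding window sum: O(n),
    # independent of radius. Out-of-range taps are clamped to the edges.
    n = len(signal)
    width = 2 * radius + 1
    clamp = lambda i: min(max(i, 0), n - 1)
    acc = sum(signal[clamp(j)] for j in range(-radius, radius + 1))
    out = []
    for i in range(n):
        out.append(acc / width)
        # Slide the window: drop the leftmost tap, add the next one.
        acc -= signal[clamp(i - radius)]
        acc += signal[clamp(i + radius + 1)]
    return out

def stacked_box_blur_1d(signal, radius):
    # Three box passes approximate a Gaussian (central limit theorem).
    for _ in range(3):
        signal = box_blur_1d(signal, radius)
    return signal
```

This is the cheap approximation being contrasted with a true Gaussian: fast and radius-independent per pass, but with a visibly different (piecewise-polynomial) kernel shape on export-quality output.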
I have two approaches in mind that could have great potential in such cases, given that we have only 256 levels per channel and need usable performance rather than the highest quality.
1) A randomly sampled blur could be much more efficient in such cases. Taking the median of a high enough number of pseudo-random samples could work well for very large blur areas. It would introduce some noise, but should run at constant speed at all zoom levels. (Comparable to depth-of-field blur in unbiased renderers.)
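Approach 1 could look something like this hypothetical per-pixel sketch: draw n taps with Gaussian-distributed offsets and take their median, so the cost per output pixel is O(n) regardless of the blur radius (the function name and parameters are illustrative, not existing code):

```python
import random
import statistics

def sampled_blur_pixel(image, x, y, sigma, n_samples=16, rng=random):
    # Estimate the blurred value at (x, y) as the median of n_samples
    # pseudo-random taps whose offsets follow a Gaussian of std dev sigma.
    # Cost is O(n_samples) per pixel, independent of sigma: constant
    # speed at all zoom levels, at the price of some noise.
    h = len(image)
    w = len(image[0])
    samples = []
    for _ in range(n_samples):
        sx = min(max(int(round(x + rng.gauss(0.0, sigma))), 0), w - 1)
        sy = min(max(int(round(y + rng.gauss(0.0, sigma))), 0), h - 1)
        samples.append(image[sy][sx])
    return statistics.median(samples)
```

(Using the mean instead of the median would give an unbiased Monte Carlo estimate of a true Gaussian blur; the median, as proposed above, is more robust to outliers but slightly changes the filter's character.)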
2) A Gaussian blur that works on a lower-resolution (1/2^n) source, with n chosen according to the effective blur radius. (Comparable to mipmapping.)
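Approach 2 amounts to picking a mipmap level so that the effective blur radius on the reduced image stays small, blurring there, and scaling the result back up. A hypothetical sketch of the level selection and the 2x downsampling step (names and the target radius of 2.0 are assumptions for illustration):

```python
def halve(image):
    # One mipmap step: average 2x2 blocks (assumes even dimensions
    # for brevity; a real implementation would handle odd sizes).
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(len(image[0]) // 2)]
            for y in range(len(image) // 2)]

def mip_level_for_radius(radius, max_level=8, target=2.0):
    # Pick n so that radius / 2**n falls at or below a small target
    # radius; the actual Gaussian blur then runs on the 1/2**n image.
    n = 0
    while n < max_level and radius / (2 ** n) > target:
        n += 1
    return n
```

The downsampling itself already discards high frequencies, so the residual blur on the small image plus the interpolation on upscaling approximates the large-radius blur at a fraction of the cost.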
Greetings from Tobias Oelgarte