Hi ppl,
It's no surprise that filters can make Inkscape crawl, and I ran into that while digging into this bug:
https://bugs.launchpad.net/inkscape/+bug/827192
In this case though, the area to be filtered measures 18416 x 17524 pixels, which is a bit too much for just on-screen display. Is there any way to work around this? Can't we set an upper limit on the area, in order to keep Inkscape somewhat responsive? If it takes a few seconds in such extreme cases, that's OK IMHO, but at least the user should be able to trust that Inkscape will recover.
Diederik
2014-07-06 9:16 GMT+02:00 Diederik van Lierop <mail@...1689...>:
Hi ppl,
It's no surprise that filters can make Inkscape crawl, and I ran into that while digging into this bug
https://bugs.launchpad.net/inkscape/+bug/827192
In this case though, the area to be filtered measures 18416 x 17524 pixels, which is a bit too much for just on screen display. Is there any way to work around this? Can't we set an upper limit on the area, in order to keep Inkscape somewhat responsive? If it takes a few seconds that's OK IMHO in such extreme cases, but at least the user should be able to trust that Inkscape will recover.
The problem is that the dependent area of the filter effect in this case is extremely large, i.e. the Gaussian blur must be computed on this enormous surface in order for the result to be correct.
We could do three things:
a) put in some logic to arbitrarily stop rendering an effect when the dependent area crosses some threshold,
b) use a stacked box blur instead of a true Gaussian blur (though I doubt it would be faster),
c) speed up the existing Gaussian blur implementation - it shouldn't be that slow.
Regards, Krzysztof
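For illustration, option a) could look roughly like the sketch below: compute the dependent area in device pixels and refuse to filter above some threshold. All names and the threshold here are hypothetical placeholders, not Inkscape's actual rendering code.

```cpp
// Hypothetical sketch of option a): skip filtering when the dependent area
// is too large to render interactively. Not actual Inkscape code.
#include <cstdint>

struct IntRect {
    int x0, y0, x1, y1;
    std::int64_t width() const { return x1 - x0; }
    std::int64_t height() const { return y1 - y0; }
};

// Arbitrary budget for interactive display; about 16.8 Mpx here.
constexpr std::int64_t MAX_FILTER_AREA_PX = 4096LL * 4096LL;

bool should_skip_filter(IntRect const &dependent_area) {
    return dependent_area.width() * dependent_area.height() > MAX_FILTER_AREA_PX;
}
```

The 18416 x 17524 px area from the bug report is roughly 323 million pixels, so a check like this would trip long before Inkscape becomes unresponsive; the open question is what to draw instead (the unfiltered source, a placeholder, or a cached low-resolution result).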
On Sun, Jul 6, 2014, at 01:05 PM, Krzysztof Kosiński wrote:
The problem is that the dependent area of the filter effect in this case is extremely large, i.e. the Gaussian blur must be computed on this enormous surface in order for the result to be correct.
We could do three things: a) put in some logic to arbitrarily stop rendering an effect when the dependent area crosses some threshold, b) use a stacked box blur instead of a true Gaussian blur (though I doubt it would be faster), c) speed up the existing Gaussian blur implementation - it shouldn't be that slow.
What about tiling the filtering? At some point, to address 'c', we should look more into breaking it up into sections. However, I'm not following that code closely enough at the moment to know whether we're getting close to that point yet or not.
In general, focusing on tuning that one operation is probably the first step. Once we get to leveraging multi-core processing better, we can revisit the tiling question.
2014-07-07 2:32 GMT+02:00 Jon A. Cruz <jon@...18...>:
What about tiling the filtering? At some point, to address 'c', we should look more into breaking it up into sections. However, I'm not following that code closely enough at the moment to know whether we're getting close to that point yet or not.
The filtering is already broken into tiles. However, when the Gaussian blur radius is, for example, 1000 screen pixels, every pixel of the output depends on a 2000x2000 px area of the input. There is no way around that.
In fact, when the dependency radius is larger than the tile size and the filter is complex, tiling actually harms performance, because the same areas of filter primitives and the source graphic are recomputed for many tiles.
Regards, Krzysztof
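As a rough back-of-the-envelope illustration of that redundancy (hypothetical numbers, not measurements): with tile edge T and dependency radius r, each tile has to read an input region of (T + 2r)^2 pixels, so the total input read is about ((T + 2r)/T)^2 times what an untiled render would need.

```cpp
// Rough estimate of the input redundancy when tiling a filter whose
// dependency radius exceeds the tile size. Illustrative numbers only.
#include <cstdio>

int main() {
    double tile = 256.0;    // hypothetical tile edge in px
    double radius = 1000.0; // dependency radius from the 1000 px blur example
    double per_tile_input = (tile + 2.0 * radius) * (tile + 2.0 * radius);
    double redundancy = per_tile_input / (tile * tile);
    // With these numbers each 256 px tile reads about 5.1 Mpx of input,
    // i.e. roughly 78x more total input than rendering without tiles.
    std::printf("per-tile input: %.0f px, redundancy: %.0fx\n",
                per_tile_input, redundancy);
    return 0;
}
```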
On Mon, Jul 07, 2014 at 03:26:10AM +0200, Krzysztof Kosiński wrote:
2014-07-07 2:32 GMT+02:00 Jon A. Cruz <jon@...18...>:
What about tiling the filtering? At some point, to address 'c', we should look more into breaking it up into sections. However, I'm not following that code closely enough at the moment to know whether we're getting close to that point yet or not.
The filtering is already broken into tiles. However, when the Gaussian blur radius is for example 1000 screen pixels, then every pixel of the output depends on a 2000x2000 px area of the input. There is no way around that.
Actually there is: you can do it with short-time Fourier transforms, which compute the large-scale component just once and share it across all the tiles, then compute only the information around each tile at the local scale. There are similar tricks built around the fact that a large-scale binomial distribution is the same as a Gaussian.
It might even be worth implementing if this is a common problem. But I would look at approximations for the local area before spending time on that.
njh
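For reference, the binomial trick Nathan mentions is essentially the central limit theorem in disguise: convolving a few box filters together approaches a Gaussian, and a box filter can be evaluated with a running sum, so the per-pixel cost does not grow with the radius. A minimal, single-channel, horizontal-only sketch assuming clamp-to-edge handling (not Inkscape code):

```cpp
// Iterated box blur: a few box passes approximate a Gaussian (central limit
// theorem), and each pass costs O(1) per pixel thanks to a running sum.
// Single channel, horizontal pass only, clamp-to-edge; illustrative only.
#include <algorithm>
#include <vector>

void box_blur_h(std::vector<float> &row, int radius) {
    int n = static_cast<int>(row.size());
    if (n == 0 || radius <= 0) return;
    std::vector<float> out(n);
    float window = 2.0f * radius + 1.0f;
    // Prime the running sum for x = 0, treating out-of-range pixels as
    // clamped copies of the edge.
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i)
        sum += row[std::clamp(i, 0, n - 1)];
    for (int x = 0; x < n; ++x) {
        out[x] = sum / window;
        sum += row[std::clamp(x + radius + 1, 0, n - 1)]; // enters the window
        sum -= row[std::clamp(x - radius, 0, n - 1)];     // leaves the window
    }
    row.swap(out);
}

// Three passes roughly match a Gaussian; the SVG spec's feGaussianBlur
// approximation uses box width d ~ sigma * 3 * sqrt(2*pi) / 4, so the
// radius used here would be about (d - 1) / 2.
void approx_gaussian_h(std::vector<float> &row, int radius) {
    for (int pass = 0; pass < 3; ++pass)
        box_blur_h(row, radius);
}
```

Whether something like this actually beats the current implementation is exactly Krzysztof's doubt about option b); the sketch only shows why the per-pixel cost stays constant with the radius.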
On Sun, Jul 6, 2014, at 10:03 PM, Nathan Hurst wrote:
Actually there is: you can do it with short-time Fourier transforms, which compute the large-scale component just once and share it across all the tiles, then compute only the information around each tile at the local scale. There are similar tricks built around the fact that a large-scale binomial distribution is the same as a Gaussian.
It might even be worth implementing if this is a common problem. But I would look at approximations for the local area before spending time on that.
Yes, I thought there were interesting approaches that could be taken here... it's just been a while since I delved into those depths. Also, the manner in which Inkscape tiles things may not necessarily be the best for rendering. I recall some 'interesting' subdividing the last time I was working on multi-monitor support.
However, the main thing is to be sure we prioritize the efforts that will pay off sooner. I'm sure there are some areas with relatively low-hanging fruit. An "I give up" option for large and/or zoomed areas is a quick UI trick that could cover a lot of the bases, especially since we are focused on editing rather than viewing.
On 07/06/2014 10:05 PM, Krzysztof Kosiński wrote:
... We could do three things: a) put in some logic to arbitrarily stop rendering an effect when the dependent area crosses some threshold, b) use a stacked box blur instead of a true Gaussian blur (though I doubt it would be faster), c) speed up the existing Gaussian blur implementation - it shouldn't be that slow.
The stacked box blur would be unacceptably wrong for export purposes (contrary to what seems to be popular belief, you most definitely can see the difference between most of the simple approximations and a true Gaussian blur), but for viewing it might not be the worst possible idea.
As for speeding up the existing implementation, the current algorithm is probably /the/ fastest high-quality(!) implementation around (at the time I had a good look around to see what was out there, and most programs either used bad approximations or a simple convolution...). Having said that, it turns out that this particular algorithm, though superfast in theory (something like 7 flops per pixel), needs quite a few tweaks to make it behave properly, which hurts its speed quite badly (and also makes using SSE and the like a little harder).
So while I believe it might be hard to speed up the current implementation significantly, there is definitely room for a faster low-quality algorithm that could be used for viewing purposes.
We could of course also use a GPU or whatever, but that's also not entirely trivial.
(BTW, I should still have some attempts at replacing the current algorithm lying around, and I have a few weeks off soon, so perhaps it's time to dust that work off...)
On 07.07.2014 10:18, Jasper van de Gronde wrote:
The stacked box blur would be unacceptably wrong for export purposes (contrary to what seems to be popular belief, you most definitely can see the difference between most of the simple approximations and a true Gaussian blur), but for viewing it might not be the worst possible idea.
As for speeding up the existing implementation, the current algorithm is probably /the/ fastest high-quality(!) implementation around (at the time I had a good look around to see what was out there, and most programs either used bad approximations or a simple convolution...). Having said that, it turns out that this particular algorithm, though superfast in theory (something like 7 flops per pixel), needs quite a few tweaks to make it behave properly, which hurts its speed quite badly (and also makes using SSE and the like a little harder).
So while I believe it might be hard to speed up the current implementation significantly, there is definitely room for a faster low-quality algorithm that could be used for viewing purposes. We could of course also use a GPU or whatever, but that's also not entirely trivial. (BTW, I should still have some attempts at replacing the current algorithm lying around, and I have a few weeks off soon, so perhaps it's time to dust that work off...)
I have two approaches in mind that could have great potential in such cases, given that we only have 256 levels per channel and don't expect the highest quality, just usable performance.
1) A randomly sampled blur could be much more efficient in such cases. Taking the median of a high enough number of pseudo-random samples could be a good approach if we have very large blur areas. It would result in some noise, but should have constant speed at all zoom levels. (Comparable to depth-of-field blur in unbiased renderers; a rough sketch follows below.)
2) A Gaussian blur that works on a lower-resolution (1/2^n) source depending on the effective blur radius. (Comparable to mipmapping.)
Greetings from Tobias Oelgarte
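A rough sketch of what idea 1) could look like, using the mean per the correction later in the thread (single channel, hypothetical names, a fixed seed, and no attempt at cache-friendly access):

```cpp
// Sketch of idea 1): estimate each output pixel as the mean of N pseudo-random
// samples drawn from a Gaussian around it. Noisy, but the per-pixel cost is N
// regardless of the blur radius. Single channel, hypothetical names only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<float> px; // row-major, w * h values

    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[static_cast<std::size_t>(y) * w + x];
    }
};

Image sampled_blur(Image const &src, float sigma, int samples = 32) {
    Image dst{src.w, src.h, std::vector<float>(src.px.size())};
    std::mt19937 rng(42); // fixed seed so the noise pattern stays stable
    std::normal_distribution<float> offset(0.0f, sigma);
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            float sum = 0.0f;
            for (int s = 0; s < samples; ++s)
                sum += src.at(x + static_cast<int>(std::lround(offset(rng))),
                              y + static_cast<int>(std::lround(offset(rng))));
            dst.px[static_cast<std::size_t>(y) * src.w + x] = sum / samples;
        }
    }
    return dst;
}
```

As Nathan points out further down, the scattered reads are hostile to the cache; stratified or low-discrepancy sample patterns would also reduce the noise for the same sample count.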
On Mon, Jul 07, 2014 at 06:17:11PM +0200, Tobias Oelgarte wrote:
- A randomly sampled blur could be much more efficient in such cases.
Taking the median of high enough number of pseudo random samples could be a good way if we have very large blur areas. It would result in some noise, but should have constant speed at all zoom levels. (Comparable to depth blur in unbiased renderers)
Neat idea. This would have pretty bad cache coherence though, I think. Also, why median rather than mean?
- A Gaussian blur that works on a lower resolution (1/2^n) source
depending on the effective blur radius. (Comparable to Mipmapping)
Without care you'll end up with annoying box artifacts.
njh
On 07.07.2014 19:29, Nathan Hurst wrote:
On Mon, Jul 07, 2014 at 06:17:11PM +0200, Tobias Oelgarte wrote:
- A randomly sampled blur could be much more efficient in such cases.
Taking the median of high enough number of pseudo random samples could be a good way if we have very large blur areas. It would result in some noise, but should have constant speed at all zoom levels. (Comparable to depth blur in unbiased renderers)
Neat idea. This would have pretty bad cache coherence though, I think. Also, why median rather than mean?
Sorry. I meant the mean.
- A Gaussian blur that works on a lower resolution (1/2^n) source
depending on the effective blur radius. (Comparable to Mipmapping)
Without care you'll end up with annoying box artifacts.
njh
The same is true for mipmapping. You just have to avoid scaling it down one step too early.
Tobias Oelgarte
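To make idea 2) concrete, here is a tiny sketch of just the level-selection arithmetic (the thresholds are arbitrary, and the actual downscaling, blurring and upscaling are left out):

```cpp
// Sketch of idea 2): pick a mipmap-like level so the blur radius at the
// reduced resolution stays below a budget; the budget controls how early
// we dare to scale down. Numbers and threshold are illustrative only.
#include <cstdio>

int main() {
    double sigma = 1000.0;            // effective blur stddev in screen px
    double max_sigma_at_level = 16.0; // largest stddev we blur directly
    int level = 0;
    while (sigma / (1 << level) > max_sigma_at_level && level < 8)
        ++level;
    // For sigma = 1000 this picks level 6: blur a 1/64 resolution copy with
    // sigma of about 15.6 px and scale the result back up. The blurred
    // content is low-frequency anyway, so the upscaling artifacts stay small
    // as long as the level is not chosen too aggressively.
    std::printf("render at 1/%d resolution, blur with sigma = %.2f px\n",
                1 << level, sigma / (1 << level));
    return 0;
}
```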
Krzysztof Kosiński <tweenk.pl@...360...> writes:
We could do three things: a) put in some logic to arbitrarily stop rendering an effect when the dependent area crosses some threshold, b) use a stacked box blur instead of a true Gaussian blur (though I doubt it would be faster), c) speed up the existing Gaussian blur implementation - it shouldn't be that slow.
Regards, Krzysztof
If it's OK to contribute my two cents as a user: I would prefer loss of quality in the work area over a slowdown. If, for example, zooming in on a blurred shape made large chunky pixels visible, that would still be better than having to contend with a slow system. So if you have a way of replacing the blurred object with a static bitmap instead of having to recalculate the blur every time the display changes, that would be acceptable. I'm talking about the work-area display only, of course, not bitmap export.
BTW, I've been working with Inkscape in a production environment for years, but lately I've upped my usage even more and I'm using it almost all day. Even though I have the latest version of Illustrator installed, I still vastly prefer Inkscape - it is so much easier to work with.
participants (7)
- Diederik van Lierop
- Jasper van de Gronde
- Jon A. Cruz
- Krzysztof Kosiński
- Michael Grosberg
- Nathan Hurst
- Tobias Oelgarte